Dec 12 15:46:11 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 12 15:46:11 crc restorecon[4686]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 15:46:11 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: 
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 12 15:46:12 crc
restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to
system_u:object_r:container_file_t:s0:c37,c572
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 12 15:46:12 crc restorecon[4686]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 
15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 15:46:12 crc restorecon[4686]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 15:46:12 crc restorecon[4686]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 12 15:46:13 crc kubenswrapper[4693]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 15:46:13 crc kubenswrapper[4693]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 12 15:46:13 crc kubenswrapper[4693]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 15:46:13 crc kubenswrapper[4693]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
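[Annotation] The restorecon pass concludes above: each "not reset as customized by admin" entry records a file whose on-disk SELinux label differs from the policy default but was deliberately left in place, typically because a local fcontext customization or a customizable type covers it. The label itself is stored in the file's security.selinux extended attribute and can be read directly; the sketch below is a minimal illustration of that, not part of any tooling referenced in this log (the path is hypothetical, and os.getxattr is Linux-only):

import os

def selinux_context(path: str) -> str:
    """Return the SELinux label stored in the security.selinux xattr.

    This is the on-disk value restorecon compares against the policy
    default; a "not reset as customized by admin" message means a
    mismatch was left in place rather than relabeled.
    """
    # The xattr value is NUL-terminated, e.g.
    # b"system_u:object_r:container_file_t:s0:c7,c13\x00"
    return os.getxattr(path, "security.selinux").rstrip(b"\x00").decode()

if __name__ == "__main__":
    # Hypothetical path, for illustration only.
    print(selinux_context("/var/lib/kubelet"))

The kubelet (running as kubenswrapper) then starts below; its first entries warn that flags such as --container-runtime-endpoint and --system-reserved are deprecated and should move into the file passed via --config.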
Dec 12 15:46:13 crc kubenswrapper[4693]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 12 15:46:13 crc kubenswrapper[4693]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.037304 4693 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.040977 4693 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.040997 4693 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041006 4693 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041011 4693 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041017 4693 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041022 4693 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041028 4693 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041033 4693 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041039 4693 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041044 4693 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041049 4693 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041054 4693 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041059 4693 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041065 4693 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041070 4693 feature_gate.go:330] unrecognized feature gate: Example Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041075 4693 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041080 4693 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041085 4693 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041096 4693 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041101 4693 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041106 4693 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 15:46:13 crc 
kubenswrapper[4693]: W1212 15:46:13.041111 4693 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041116 4693 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041122 4693 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041127 4693 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041132 4693 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041137 4693 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041142 4693 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041147 4693 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041152 4693 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041157 4693 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041162 4693 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041167 4693 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041172 4693 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041177 4693 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041185 4693 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041193 4693 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041201 4693 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041210 4693 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
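Three distinct warning shapes appear in this stretch: feature_gate.go:330 (unrecognized, skipped), feature_gate.go:351 (deprecated, still applied), and feature_gate.go:353 (GA, still applied). A minimal runnable Go sketch of that warn-and-continue dispatch, illustrative only and not the kubelet's actual feature_gate.go:

    // gates.go - illustrative sketch of the gate handling observed above.
    package main

    import "log"

    // setGate warns about unknown names and skips them; known GA/deprecated
    // names warn but still take effect, matching the log lines above.
    func setGate(known map[string]string, enabled map[string]bool, name string, value bool) {
        maturity, ok := known[name]
        if !ok {
            log.Printf("unrecognized feature gate: %s", name)
            return
        }
        switch maturity {
        case "GA":
            log.Printf("Setting GA feature gate %s=%t. It will be removed in a future release.", name, value)
        case "Deprecated":
            log.Printf("Setting deprecated feature gate %s=%t. It will be removed in a future release.", name, value)
        }
        enabled[name] = value
    }

    func main() {
        known := map[string]string{"KMSv1": "Deprecated", "CloudDualStackNodeIPs": "GA"}
        enabled := map[string]bool{}
        setGate(known, enabled, "PlatformOperators", true) // unrecognized: warned, skipped
        setGate(known, enabled, "KMSv1", true)             // deprecated: warned, applied
    }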
Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041218 4693 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041224 4693 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041230 4693 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041235 4693 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041241 4693 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041246 4693 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041251 4693 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041256 4693 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041261 4693 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041266 4693 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041276 4693 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041281 4693 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041304 4693 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041309 4693 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041314 4693 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041330 4693 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041336 4693 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041341 4693 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041346 4693 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041352 4693 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041357 4693 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041362 4693 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041367 4693 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041374 4693 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041379 4693 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041386 4693 feature_gate.go:353] Setting GA feature gate 
CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041392 4693 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041398 4693 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041404 4693 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041409 4693 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041415 4693 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.041421 4693 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041523 4693 flags.go:64] FLAG: --address="0.0.0.0" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041534 4693 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041544 4693 flags.go:64] FLAG: --anonymous-auth="true" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041552 4693 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041559 4693 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041566 4693 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041574 4693 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041581 4693 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041588 4693 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041594 4693 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041601 4693 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041607 4693 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041613 4693 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041619 4693 flags.go:64] FLAG: --cgroup-root="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041625 4693 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041631 4693 flags.go:64] FLAG: --client-ca-file="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041637 4693 flags.go:64] FLAG: --cloud-config="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041643 4693 flags.go:64] FLAG: --cloud-provider="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041649 4693 flags.go:64] FLAG: --cluster-dns="[]" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041657 4693 flags.go:64] FLAG: --cluster-domain="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041663 4693 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041669 4693 flags.go:64] FLAG: --config-dir="" Dec 12 15:46:13 
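Every flags.go:64 "FLAG: --name=value" entry in the dump that follows comes from the kubelet walking its parsed flag set at verbosity 2 (the dump itself shows --v="2"). The walk is a simple VisitAll over the flag set; a minimal runnable sketch with github.com/spf13/pflag, printing with fmt where the kubelet itself goes through a V(1) logger:

    // printflags.go - illustrative sketch of the FLAG: dump below.
    package main

    import (
        "fmt"

        "github.com/spf13/pflag"
    )

    // printFlags visits every registered flag after parsing and prints it in
    // the same FLAG: --name="value" shape seen in the log.
    func printFlags(fs *pflag.FlagSet) {
        fs.VisitAll(func(f *pflag.Flag) {
            fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value)
        })
    }

    func main() {
        fs := pflag.NewFlagSet("kubelet", pflag.ContinueOnError)
        fs.String("node-ip", "", "IP address of the node")
        fs.Int32("max-pods", 110, "maximum number of pods")
        _ = fs.Parse([]string{"--node-ip=192.168.126.11"})
        printFlags(fs) // prints both flags, whether parsed or defaulted
    }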
crc kubenswrapper[4693]: I1212 15:46:13.041675 4693 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041681 4693 flags.go:64] FLAG: --container-log-max-files="5" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041689 4693 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041695 4693 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041701 4693 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041707 4693 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041713 4693 flags.go:64] FLAG: --contention-profiling="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041719 4693 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041725 4693 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041732 4693 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041737 4693 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041746 4693 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041751 4693 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041757 4693 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041764 4693 flags.go:64] FLAG: --enable-load-reader="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041770 4693 flags.go:64] FLAG: --enable-server="true" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041776 4693 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041784 4693 flags.go:64] FLAG: --event-burst="100" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041790 4693 flags.go:64] FLAG: --event-qps="50" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041796 4693 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041802 4693 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041808 4693 flags.go:64] FLAG: --eviction-hard="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041815 4693 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041821 4693 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041828 4693 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041834 4693 flags.go:64] FLAG: --eviction-soft="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041840 4693 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041846 4693 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041852 4693 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041858 4693 
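The eviction flags above are all empty on the command line, which is consistent with the earlier deprecation notice ("Use --eviction-hard or --eviction-soft instead"): on this node the thresholds would come from the config file or fall back to kubelet defaults. A sketch of the equivalent stanza, with the upstream Linux default hard thresholds as placeholder values rather than anything read from this node:

    # Illustrative evictionHard stanza; values are upstream kubelet defaults.
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      imagefs.available: "15%"
      nodefs.inodesFree: "5%"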
flags.go:64] FLAG: --experimental-mounter-path="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041864 4693 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041870 4693 flags.go:64] FLAG: --fail-swap-on="true" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041876 4693 flags.go:64] FLAG: --feature-gates="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041883 4693 flags.go:64] FLAG: --file-check-frequency="20s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041890 4693 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041897 4693 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041904 4693 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041910 4693 flags.go:64] FLAG: --healthz-port="10248" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041916 4693 flags.go:64] FLAG: --help="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041922 4693 flags.go:64] FLAG: --hostname-override="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041928 4693 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041935 4693 flags.go:64] FLAG: --http-check-frequency="20s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041941 4693 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041948 4693 flags.go:64] FLAG: --image-credential-provider-config="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041953 4693 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041960 4693 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041966 4693 flags.go:64] FLAG: --image-service-endpoint="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041972 4693 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041977 4693 flags.go:64] FLAG: --kube-api-burst="100" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041983 4693 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041990 4693 flags.go:64] FLAG: --kube-api-qps="50" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.041996 4693 flags.go:64] FLAG: --kube-reserved="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042002 4693 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042008 4693 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042014 4693 flags.go:64] FLAG: --kubelet-cgroups="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042020 4693 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042026 4693 flags.go:64] FLAG: --lock-file="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042032 4693 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042053 4693 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042060 4693 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 12 
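With --healthz-bind-address="127.0.0.1" and --healthz-port="10248" as shown above, the kubelet's liveness endpoint is reachable only from the node itself. An illustrative check from a shell on the node:

    $ curl -s http://127.0.0.1:10248/healthz
    ok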
15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042069 4693 flags.go:64] FLAG: --log-json-split-stream="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042076 4693 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042082 4693 flags.go:64] FLAG: --log-text-split-stream="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042088 4693 flags.go:64] FLAG: --logging-format="text" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042094 4693 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042100 4693 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042107 4693 flags.go:64] FLAG: --manifest-url="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042113 4693 flags.go:64] FLAG: --manifest-url-header="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042121 4693 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042127 4693 flags.go:64] FLAG: --max-open-files="1000000" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042134 4693 flags.go:64] FLAG: --max-pods="110" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042141 4693 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042147 4693 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042153 4693 flags.go:64] FLAG: --memory-manager-policy="None" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042159 4693 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042165 4693 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042171 4693 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042177 4693 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042190 4693 flags.go:64] FLAG: --node-status-max-images="50" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042196 4693 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042202 4693 flags.go:64] FLAG: --oom-score-adj="-999" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042209 4693 flags.go:64] FLAG: --pod-cidr="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042215 4693 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042223 4693 flags.go:64] FLAG: --pod-manifest-path="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042229 4693 flags.go:64] FLAG: --pod-max-pids="-1" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042235 4693 flags.go:64] FLAG: --pods-per-core="0" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042241 4693 flags.go:64] FLAG: --port="10250" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042247 4693 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042253 4693 flags.go:64] FLAG: 
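The --pod-infra-container-image value above is the flag the earlier server.go:211 message referred to: the kubelet will no longer protect this sandbox image from garbage collection, so it should also be configured on the CRI side. For CRI-O that is the pause_image option; a sketch, with an illustrative drop-in path:

    # /etc/crio/crio.conf.d/10-pause.conf (illustrative path)
    [crio.image]
    pause_image = "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"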
--provider-id="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042259 4693 flags.go:64] FLAG: --qos-reserved="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042265 4693 flags.go:64] FLAG: --read-only-port="10255" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042275 4693 flags.go:64] FLAG: --register-node="true" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042297 4693 flags.go:64] FLAG: --register-schedulable="true" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042304 4693 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042314 4693 flags.go:64] FLAG: --registry-burst="10" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042320 4693 flags.go:64] FLAG: --registry-qps="5" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042326 4693 flags.go:64] FLAG: --reserved-cpus="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042332 4693 flags.go:64] FLAG: --reserved-memory="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042340 4693 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042346 4693 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042352 4693 flags.go:64] FLAG: --rotate-certificates="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042358 4693 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042364 4693 flags.go:64] FLAG: --runonce="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042370 4693 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042376 4693 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042387 4693 flags.go:64] FLAG: --seccomp-default="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042393 4693 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042400 4693 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042406 4693 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042413 4693 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042418 4693 flags.go:64] FLAG: --storage-driver-password="root" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042425 4693 flags.go:64] FLAG: --storage-driver-secure="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042431 4693 flags.go:64] FLAG: --storage-driver-table="stats" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042437 4693 flags.go:64] FLAG: --storage-driver-user="root" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042442 4693 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042449 4693 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042458 4693 flags.go:64] FLAG: --system-cgroups="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042464 4693 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042473 4693 flags.go:64] FLAG: 
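The --system-reserved value above feeds directly into node allocatable: allocatable = capacity - kube-reserved - system-reserved - eviction-hard. Using the 33654128640-byte memory capacity reported further down, an empty --kube-reserved, and the upstream default 100Mi hard eviction threshold (an assumption, since the eviction flags are empty here), the memory side works out roughly to:

    capacity            33654128640 B  (~32095 Mi)
    - system-reserved     350 Mi
    - kube-reserved         0 Mi       (flag empty)
    - eviction-hard       100 Mi       (upstream default, assumed)
    = allocatable      ~31645 Mi       (~30.9 Gi)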
--system-reserved-cgroup="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042479 4693 flags.go:64] FLAG: --tls-cert-file="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042485 4693 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042491 4693 flags.go:64] FLAG: --tls-min-version="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042497 4693 flags.go:64] FLAG: --tls-private-key-file="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042503 4693 flags.go:64] FLAG: --topology-manager-policy="none" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042510 4693 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042517 4693 flags.go:64] FLAG: --topology-manager-scope="container" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042523 4693 flags.go:64] FLAG: --v="2" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042531 4693 flags.go:64] FLAG: --version="false" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042538 4693 flags.go:64] FLAG: --vmodule="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042545 4693 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.042552 4693 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042695 4693 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042702 4693 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042708 4693 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042714 4693 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042720 4693 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042725 4693 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042730 4693 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042735 4693 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042741 4693 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042746 4693 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042752 4693 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042757 4693 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042763 4693 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042768 4693 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042773 4693 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042779 4693 feature_gate.go:330] 
unrecognized feature gate: VSphereDriverConfiguration Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042784 4693 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042792 4693 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042798 4693 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042803 4693 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042808 4693 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042813 4693 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042819 4693 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042824 4693 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042831 4693 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042838 4693 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042844 4693 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042850 4693 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042857 4693 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042862 4693 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042868 4693 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042873 4693 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042878 4693 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042883 4693 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042889 4693 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042894 4693 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042899 4693 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042904 4693 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042909 4693 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042915 4693 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042920 4693 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig 
Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042925 4693 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042930 4693 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042935 4693 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042940 4693 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042945 4693 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042951 4693 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042957 4693 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042962 4693 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042968 4693 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042973 4693 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042978 4693 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042983 4693 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042989 4693 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042994 4693 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.042999 4693 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.043006 4693 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.043012 4693 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.043017 4693 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.043022 4693 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.043027 4693 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.043032 4693 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.043037 4693 feature_gate.go:330] unrecognized feature gate: Example Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.043042 4693 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.043047 4693 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.043053 4693 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.043060 4693 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.043065 4693 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.043072 4693 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.043080 4693 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.043087 4693 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.043103 4693 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.051444 4693 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.051475 4693 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051552 4693 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051559 4693 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051564 4693 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051568 4693 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051573 4693 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051576 4693 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051580 4693 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051583 4693 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051587 4693 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051591 4693 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051594 4693 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051598 4693 feature_gate.go:330] unrecognized feature gate: Example Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051601 4693 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051605 4693 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051608 4693 feature_gate.go:330] 
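The kubeletVersion="v1.31.5" logged above is the same string the node advertises in its status. Assuming the node object is also named crc, as the journal hostname suggests, it can be read back from a running cluster with:

    $ oc get node crc -o jsonpath='{.status.nodeInfo.kubeletVersion}'
    v1.31.5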
unrecognized feature gate: NetworkLiveMigration Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051612 4693 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051615 4693 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051619 4693 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051622 4693 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051626 4693 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051631 4693 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051637 4693 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051641 4693 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051645 4693 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051649 4693 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051652 4693 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051656 4693 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051661 4693 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051664 4693 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051668 4693 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051672 4693 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051677 4693 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051681 4693 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051685 4693 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051688 4693 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051692 4693 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051695 4693 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051699 4693 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051702 4693 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051706 4693 feature_gate.go:330] unrecognized feature 
gate: UpgradeStatus Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051710 4693 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051715 4693 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051719 4693 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051722 4693 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051726 4693 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051730 4693 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051734 4693 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051738 4693 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051743 4693 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051746 4693 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051750 4693 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051753 4693 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051757 4693 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051760 4693 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051764 4693 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051767 4693 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051773 4693 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051776 4693 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051780 4693 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051783 4693 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051787 4693 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051790 4693 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051794 4693 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051797 4693 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051802 4693 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051807 4693 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051812 4693 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051816 4693 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051820 4693 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051823 4693 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051827 4693 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.051834 4693 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051964 4693 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051973 4693 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051977 4693 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051981 4693 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051985 4693 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051991 4693 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.051995 4693 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052000 4693 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052005 4693 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052009 4693 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052013 4693 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052017 4693 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052021 4693 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052024 4693 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052029 4693 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052034 4693 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052038 4693 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052042 4693 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052046 4693 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052051 4693 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052056 4693 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052060 4693 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052064 4693 feature_gate.go:330] unrecognized feature gate: Example Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052068 4693 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052072 4693 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052076 4693 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052080 4693 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052084 4693 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052087 4693 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052091 4693 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052095 4693 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052098 4693 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052102 4693 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052106 4693 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052110 4693 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052114 4693 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052119 4693 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052124 4693 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052128 4693 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052132 4693 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052135 4693 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052139 4693 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052142 4693 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052146 4693 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052149 4693 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052153 4693 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052156 4693 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052160 4693 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052163 4693 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052167 4693 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052170 4693 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052174 4693 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052177 4693 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052181 4693 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052185 4693 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052188 4693 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052192 4693 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052196 4693 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052200 4693 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052204 4693 feature_gate.go:330] unrecognized feature gate: 
MachineAPIProviderOpenStack Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052207 4693 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052211 4693 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052214 4693 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052218 4693 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052221 4693 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052225 4693 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052229 4693 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052232 4693 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052236 4693 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052239 4693 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.052243 4693 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.052249 4693 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.052653 4693 server.go:940] "Client rotation is on, will bootstrap in background" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.056032 4693 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.056129 4693 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
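The rotation bookkeeping that follows is driven by the on-disk client certificate loaded above from /var/lib/kubelet/pki/kubelet-client-current.pem; its notAfter can be checked directly and should match the "Certificate expiration is 2026-02-24 05:52:08 +0000 UTC" line just below. The rotation deadline is a jittered point late in the certificate's lifetime, and the "Waiting 88h57m..." figure is simply that deadline minus the current time. Illustrative check:

    $ openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet-client-current.pem
    notAfter=Feb 24 05:52:08 2026 GMT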
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.056745 4693 server.go:997] "Starting client certificate rotation"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.056760 4693 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.056941 4693 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-16 08:43:19.733171754 +0000 UTC
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.057038 4693 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 88h57m6.676138631s for next certificate rotation
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.062046 4693 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.063735 4693 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.086126 4693 log.go:25] "Validated CRI v1 runtime API"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.113822 4693 log.go:25] "Validated CRI v1 image API"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.115836 4693 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.118382 4693 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-12-15-41-24-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.118415 4693 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.138127 4693 manager.go:217] Machine: {Timestamp:2025-12-12 15:46:13.136723554 +0000 UTC m=+0.305363175 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:7f31af20-0471-4822-ac00-478aed93de06 BootID:06cc8039-d4d0-428c-b1fb-d3ae486da4dd Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f1:42:c7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f1:42:c7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ce:08:92 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:62:86:76 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:99:27:3f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:3f:86:70 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:a6:6d:d5:55:5b:e5 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f2:98:c9:70:f8:ac Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.138423 4693 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.138610 4693 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.139814 4693 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.140220 4693 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.140326 4693 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.140725 4693 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.140747 4693 container_manager_linux.go:303] "Creating device plugin manager"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.141160 4693 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.141236 4693 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.141921 4693 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.142092 4693 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.155614 4693 kubelet.go:418] "Attempting to sync node with API server"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.155665 4693 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.155719 4693 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.155750 4693 kubelet.go:324] "Adding apiserver pod source"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.155775 4693 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.158769 4693 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.158828 4693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused
Dec 12 15:46:13 crc kubenswrapper[4693]: E1212 15:46:13.158897 4693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError"
Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.159388 4693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused
Dec 12 15:46:13 crc kubenswrapper[4693]: E1212 15:46:13.159441 4693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.159542 4693 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
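Two details are worth noting in the certificate_manager lines above. First, the rotation deadline (2025-12-16) falls well before the certificate's expiry (2026-02-24): the manager schedules rotation at a jittered fraction of the certificate's remaining lifetime rather than at the end of it. Second, the logged wait is simply deadline minus now, which can be checked directly (standalone Go, timestamps hard-coded from the log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamp of the "Waiting ..." log line and the logged deadline.
        now := time.Date(2025, time.December, 12, 15, 46, 13, 57038000, time.UTC)
        deadline := time.Date(2025, time.December, 16, 8, 43, 19, 733171754, time.UTC)
        fmt.Println(deadline.Sub(now)) // 88h57m6.676133754s
    }

The ~5µs gap against the logged 88h57m6.676138631s is just the interval between when the manager sampled the clock and when the log line was stamped.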
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.160255 4693 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.160958 4693 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.160995 4693 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.161004 4693 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.161013 4693 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.161029 4693 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.161039 4693 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.161048 4693 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.161061 4693 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.161071 4693 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.161079 4693 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.161093 4693 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.161103 4693 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.161666 4693 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.162175 4693 server.go:1280] "Started kubelet"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.163147 4693 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.163147 4693 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.163487 4693 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.163768 4693 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 15:46:13 crc systemd[1]: Started Kubernetes Kubelet.
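The ratelimit.go line above (qps=100 burstTokens=10) describes a token bucket in front of the podresources gRPC endpoint: up to 10 requests can land instantly, after which tokens refill at 100 per second. A sketch of the same semantics using golang.org/x/time/rate (assumption: the kubelet wires an equivalent limiter into its gRPC server; this standalone program only demonstrates the qps/burst behavior):

    package main

    import (
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // Same shape as the logged settings: qps=100, burstTokens=10.
        limiter := rate.NewLimiter(rate.Limit(100), 10)
        granted := 0
        for i := 0; i < 25; i++ {
            if limiter.Allow() {
                granted++
            }
        }
        fmt.Printf("granted %d of 25 immediate requests (burst=10)\n", granted)
        time.Sleep(100 * time.Millisecond) // ~10 more tokens accrue at 100/s
        fmt.Println("after 100ms:", limiter.Allow())
    }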
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.165298 4693 server.go:460] "Adding debug handlers to kubelet server"
Dec 12 15:46:13 crc kubenswrapper[4693]: E1212 15:46:13.193800 4693 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.204:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188082591bf78f9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:46:13.162143642 +0000 UTC m=+0.330783243,LastTimestamp:2025-12-12 15:46:13.162143642 +0000 UTC m=+0.330783243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.195365 4693 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.195588 4693 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.195629 4693 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:21:03.538394463 +0000 UTC
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.195688 4693 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 732h34m50.342709446s for next certificate rotation
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.195759 4693 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.195774 4693 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.195751 4693 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 12 15:46:13 crc kubenswrapper[4693]: E1212 15:46:13.195933 4693 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.196874 4693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused
Dec 12 15:46:13 crc kubenswrapper[4693]: E1212 15:46:13.197162 4693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError"
Dec 12 15:46:13 crc kubenswrapper[4693]: E1212 15:46:13.197194 4693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="200ms"
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.199059 4693 factory.go:153] Registering CRI-O factory
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.199418 4693 factory.go:221] Registration of the crio container factory successfully
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.199515 4693 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.199529 4693 factory.go:55] Registering systemd factory
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.199540 4693 factory.go:221] Registration of the systemd container factory successfully
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.199565 4693 factory.go:103] Registering Raw factory
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.199620 4693 manager.go:1196] Started watching for new ooms in manager
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.200401 4693 manager.go:319] Starting recovery of all containers
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206746 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206792 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206804 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206815 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206826 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206840 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206852 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206861 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206872 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206881 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206890 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206901 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206910 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206931 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206942 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206951 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206962 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206971 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206980 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.206990 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207002 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207012 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207023 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207033 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207044 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207054 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207433 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207447 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207458 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207469 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207478 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207513 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207524 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207534 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207543 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207553 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207562 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207572 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207581 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207590 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207600 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207608 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207618 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207628 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207638 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207647 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207657 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207668 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207678 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207687 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207697 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207708 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207724 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207736 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207746 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207757 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207768 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207778 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207787 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207798 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207808 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207817 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207829 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207841 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207853 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207866 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207877 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207890 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207899 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207911 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207922 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207933 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207942 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207952 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207962 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207977 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.207989 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208002 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208016 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208027 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208037 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208046 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208055 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208066 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208079 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208092 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208104 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208115 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208126 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208137 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208150 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208166 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208178 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208190 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208203 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208216 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208228 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208241 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208384 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208405 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208417 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208429 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208442 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208461 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208482 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208497 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208512 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208536 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208554 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208572 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208591 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208606 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208618 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208631 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208642 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208653 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208664 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208675 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208686 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208697 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208716 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208729 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208741 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208753 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208765 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208777 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208788 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208799 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208811 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208823 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208835 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208847 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208905 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208919 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208930 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208941 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208951 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208962 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208975 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.208987 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209000 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b"
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209011 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209024 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209037 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209050 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209061 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209072 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209132 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209166 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209177 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209187 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209198 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209209 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209221 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209232 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209275 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209304 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209318 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209330 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209342 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209400 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209416 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209428 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209465 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209477 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209489 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209499 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209512 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209522 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209533 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209544 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209579 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209589 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209630 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209647 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209699 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209712 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209725 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209736 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209770 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209837 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209854 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.209926 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.210027 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.210098 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.210116 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.210880 4693 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.210907 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.210924 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.210938 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.210959 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.210988 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211007 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211020 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211033 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211046 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211059 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211105 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211120 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211160 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211172 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211183 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211232 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211251 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211264 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211308 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211328 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211340 4693 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211350 4693 reconstruct.go:97] "Volume reconstruction finished" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.211366 4693 reconciler.go:26] "Reconciler: start to sync state" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.219317 4693 manager.go:324] Recovery completed Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.241312 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.246759 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.247084 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.247173 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.248601 4693 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.248637 4693 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.248664 4693 state_mem.go:36] "Initialized new in-memory state store" Dec 12 15:46:13 crc kubenswrapper[4693]: E1212 15:46:13.296479 4693 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.353860 4693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.355721 4693 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.355773 4693 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.355804 4693 kubelet.go:2335] "Starting kubelet main sync loop" Dec 12 15:46:13 crc kubenswrapper[4693]: E1212 15:46:13.355861 4693 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 15:46:13 crc kubenswrapper[4693]: W1212 15:46:13.356890 4693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 12 15:46:13 crc kubenswrapper[4693]: E1212 15:46:13.356971 4693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.382628 4693 policy_none.go:49] "None policy: Start" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.383695 4693 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.383720 4693 state_mem.go:35] "Initializing new in-memory state store" Dec 12 15:46:13 crc kubenswrapper[4693]: E1212 15:46:13.397226 4693 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 15:46:13 crc kubenswrapper[4693]: E1212 15:46:13.398033 4693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="400ms" Dec 12 15:46:13 crc kubenswrapper[4693]: E1212 15:46:13.456642 4693 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.465507 4693 manager.go:334] "Starting Device Plugin manager" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.465629 4693 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.465646 4693 server.go:79] "Starting device plugin registration server" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.466049 4693 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.466065 4693 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.466435 4693 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.466627 4693 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.466639 4693 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 15:46:13 crc kubenswrapper[4693]: E1212 15:46:13.476100 4693 eviction_manager.go:285] 
"Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.566170 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.567736 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.567776 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.567791 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.567824 4693 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 15:46:13 crc kubenswrapper[4693]: E1212 15:46:13.568306 4693 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.657759 4693 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.657985 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.659764 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.659815 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.659832 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.660024 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.660351 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.660418 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.662062 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.662119 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.662144 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.662618 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.662676 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.662702 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.662879 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.663234 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.663399 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.664178 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.664221 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.664261 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.664507 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.664694 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.664763 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.664858 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.664913 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.664936 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.665838 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.665893 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.665915 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.666071 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.666239 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.666466 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.666498 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.666538 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.666595 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.667158 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.667235 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.667260 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.667563 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.667627 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.668928 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.668975 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.668996 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.669581 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.669621 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.669632 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.717138 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.717215 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.717270 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.717379 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.717414 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.717445 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.717476 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.717505 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.717546 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.717605 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.717657 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.717699 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.717750 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.717819 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.717874 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: 
I1212 15:46:13.768838 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.770324 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.770359 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.770370 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.770398 4693 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 15:46:13 crc kubenswrapper[4693]: E1212 15:46:13.771263 4693 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Dec 12 15:46:13 crc kubenswrapper[4693]: E1212 15:46:13.799679 4693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="800ms" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.819232 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.819311 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.819336 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.819340 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.819382 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.819679 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.819404 4693 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.819770 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.819802 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.819309 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.819912 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.819839 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820166 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820203 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820228 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820252 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820302 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820329 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820359 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820383 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820409 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820471 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820591 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820685 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820760 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820768 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820800 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820844 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820857 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.820891 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.990077 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 12 15:46:13 crc kubenswrapper[4693]: I1212 15:46:13.997020 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.014416 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.032229 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 15:46:14 crc kubenswrapper[4693]: W1212 15:46:14.032591 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-0e23bfdbd1fda23ca1fb57cd48ad8d1bdc47a62cc70a2fd7354d34d301797ca2 WatchSource:0}: Error finding container 0e23bfdbd1fda23ca1fb57cd48ad8d1bdc47a62cc70a2fd7354d34d301797ca2: Status 404 returned error can't find the container with id 0e23bfdbd1fda23ca1fb57cd48ad8d1bdc47a62cc70a2fd7354d34d301797ca2 Dec 12 15:46:14 crc kubenswrapper[4693]: W1212 15:46:14.035350 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-55cda6d02fbc6d14c32f705246b473c73f62e88959d7b250ad3bb59945a40216 WatchSource:0}: Error finding container 55cda6d02fbc6d14c32f705246b473c73f62e88959d7b250ad3bb59945a40216: Status 404 returned error can't find the container with id 55cda6d02fbc6d14c32f705246b473c73f62e88959d7b250ad3bb59945a40216 Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.037831 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:46:14 crc kubenswrapper[4693]: W1212 15:46:14.041664 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-de5779bfb3bf6a615ff37ea039d1fccef222907a6a3ba60eed7f1ce3ee62814d WatchSource:0}: Error finding container de5779bfb3bf6a615ff37ea039d1fccef222907a6a3ba60eed7f1ce3ee62814d: Status 404 returned error can't find the container with id de5779bfb3bf6a615ff37ea039d1fccef222907a6a3ba60eed7f1ce3ee62814d Dec 12 15:46:14 crc kubenswrapper[4693]: W1212 15:46:14.050693 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7fc346fae5c139d8faf83079b861c005b1228374d2c81365db4cfe7ec07f45ec WatchSource:0}: Error finding container 7fc346fae5c139d8faf83079b861c005b1228374d2c81365db4cfe7ec07f45ec: Status 404 returned error can't find the container with id 7fc346fae5c139d8faf83079b861c005b1228374d2c81365db4cfe7ec07f45ec Dec 12 15:46:14 crc kubenswrapper[4693]: W1212 15:46:14.087304 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-401c44bbb41778337b937abec097bc087f96535f6bc386247350de32421e4bc4 WatchSource:0}: Error finding container 401c44bbb41778337b937abec097bc087f96535f6bc386247350de32421e4bc4: Status 404 returned error can't find the container with id 401c44bbb41778337b937abec097bc087f96535f6bc386247350de32421e4bc4 Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.164493 4693 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.172111 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.173777 4693 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.173852 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.173866 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.173891 4693 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 15:46:14 crc kubenswrapper[4693]: E1212 15:46:14.174349 4693 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Dec 12 15:46:14 crc kubenswrapper[4693]: W1212 15:46:14.225772 4693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 12 15:46:14 crc kubenswrapper[4693]: E1212 15:46:14.226375 4693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.360653 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"401c44bbb41778337b937abec097bc087f96535f6bc386247350de32421e4bc4"} Dec 12 15:46:14 crc kubenswrapper[4693]: W1212 15:46:14.361245 4693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 12 15:46:14 crc kubenswrapper[4693]: E1212 15:46:14.361344 4693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.362117 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7fc346fae5c139d8faf83079b861c005b1228374d2c81365db4cfe7ec07f45ec"} Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.362986 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"de5779bfb3bf6a615ff37ea039d1fccef222907a6a3ba60eed7f1ce3ee62814d"} Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.364216 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0e23bfdbd1fda23ca1fb57cd48ad8d1bdc47a62cc70a2fd7354d34d301797ca2"} Dec 
12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.365464 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"55cda6d02fbc6d14c32f705246b473c73f62e88959d7b250ad3bb59945a40216"} Dec 12 15:46:14 crc kubenswrapper[4693]: W1212 15:46:14.479525 4693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 12 15:46:14 crc kubenswrapper[4693]: E1212 15:46:14.479617 4693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Dec 12 15:46:14 crc kubenswrapper[4693]: W1212 15:46:14.555862 4693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 12 15:46:14 crc kubenswrapper[4693]: E1212 15:46:14.555927 4693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Dec 12 15:46:14 crc kubenswrapper[4693]: E1212 15:46:14.601183 4693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="1.6s" Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.975339 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.976993 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.977055 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.977069 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:14 crc kubenswrapper[4693]: I1212 15:46:14.977099 4693 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 15:46:14 crc kubenswrapper[4693]: E1212 15:46:14.977669 4693 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.164177 4693 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.370635 
4693 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="138206b8b174ebead583b6953999e7e3f8699191291ba8635a106d8ed56efbb5" exitCode=0 Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.370876 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"138206b8b174ebead583b6953999e7e3f8699191291ba8635a106d8ed56efbb5"} Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.370973 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.372172 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.372214 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.372224 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.373931 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc"} Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.375777 4693 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02" exitCode=0 Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.375854 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02"} Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.375897 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.377091 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.377117 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.377125 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.377997 4693 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="17b1b97779c4ee45d0ecc02bd4fddf2ca83c945878e2dff9464b4141686b35fa" exitCode=0 Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.378059 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"17b1b97779c4ee45d0ecc02bd4fddf2ca83c945878e2dff9464b4141686b35fa"} Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.378093 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.381472 4693 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.381537 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.381555 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.382033 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.383869 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.383927 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.383946 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.385519 4693 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e" exitCode=0 Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.385570 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e"} Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.385611 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.387192 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.387253 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:15 crc kubenswrapper[4693]: I1212 15:46:15.387315 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.164020 4693 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 12 15:46:16 crc kubenswrapper[4693]: E1212 15:46:16.202582 4693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="3.2s" Dec 12 15:46:16 crc kubenswrapper[4693]: W1212 15:46:16.205340 4693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 12 15:46:16 crc kubenswrapper[4693]: E1212 15:46:16.205415 4693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.391302 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb"} Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.391361 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f"} Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.393973 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784"} Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.396828 4693 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="765f250f611d130fc50b8e55150a897a3883f81556a7ba929f6dadb35c352dc0" exitCode=0 Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.396958 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"765f250f611d130fc50b8e55150a897a3883f81556a7ba929f6dadb35c352dc0"} Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.396994 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.398028 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.398076 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.398100 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.399710 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1"} Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.401173 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"23217ef6881b3e63efba7e3f80279f3a3a967f82adaaaee3ce1235a1164e2f9e"} Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.401336 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.402830 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.402876 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.402896 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.578124 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.579329 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.579376 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.579391 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:16 crc kubenswrapper[4693]: I1212 15:46:16.579420 4693 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 15:46:16 crc kubenswrapper[4693]: E1212 15:46:16.579807 4693 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Dec 12 15:46:16 crc kubenswrapper[4693]: W1212 15:46:16.702971 4693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 12 15:46:16 crc kubenswrapper[4693]: E1212 15:46:16.703088 4693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Dec 12 15:46:16 crc kubenswrapper[4693]: W1212 15:46:16.776766 4693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 12 15:46:16 crc kubenswrapper[4693]: E1212 15:46:16.776836 4693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Dec 12 15:46:16 crc kubenswrapper[4693]: W1212 15:46:16.805104 4693 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 12 15:46:16 crc kubenswrapper[4693]: E1212 15:46:16.805503 4693 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.164243 4693 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.407720 4693 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="de183a390733f9a095b9f0ddb181c9e04a8092d555b74ffc3b3d91b48b3c3b10" exitCode=0 Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.407848 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"de183a390733f9a095b9f0ddb181c9e04a8092d555b74ffc3b3d91b48b3c3b10"} Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.407902 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.410339 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.410437 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.410459 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.411531 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"dc30784ce0860622be7856d80caddb1a7f8c510518a0d7dc647eba7bb3671c8a"} Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.415346 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304"} Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.415443 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.416613 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.416666 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.416689 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.419454 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270"} Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.419531 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.420586 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.420635 4693 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:17 crc kubenswrapper[4693]: I1212 15:46:17.420656 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:18 crc kubenswrapper[4693]: I1212 15:46:18.163898 4693 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 12 15:46:18 crc kubenswrapper[4693]: I1212 15:46:18.432721 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7"} Dec 12 15:46:18 crc kubenswrapper[4693]: I1212 15:46:18.434891 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4b8a94d6e3115a3afb2daec3d094b3b600e283e93c7f601999eebc5c5543db39"} Dec 12 15:46:18 crc kubenswrapper[4693]: I1212 15:46:18.437854 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:18 crc kubenswrapper[4693]: I1212 15:46:18.437890 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f36b5280d53c4c3a10ab04273c8f2c02d7118b49f7bcf33eaada7891585e396d"} Dec 12 15:46:18 crc kubenswrapper[4693]: I1212 15:46:18.437919 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:18 crc kubenswrapper[4693]: I1212 15:46:18.438672 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:18 crc kubenswrapper[4693]: I1212 15:46:18.438698 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:18 crc kubenswrapper[4693]: I1212 15:46:18.438707 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:18 crc kubenswrapper[4693]: I1212 15:46:18.439386 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:18 crc kubenswrapper[4693]: I1212 15:46:18.439411 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:18 crc kubenswrapper[4693]: I1212 15:46:18.439419 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:19 crc kubenswrapper[4693]: I1212 15:46:19.445246 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e"} Dec 12 15:46:19 crc kubenswrapper[4693]: I1212 15:46:19.445348 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a"} Dec 12 15:46:19 crc kubenswrapper[4693]: I1212 15:46:19.450186 4693 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e63cb5d27ac7c233ff4d15cd75532081dd0a4da7c8cb027bf2d500952e0711e2"} Dec 12 15:46:19 crc kubenswrapper[4693]: I1212 15:46:19.450244 4693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 15:46:19 crc kubenswrapper[4693]: I1212 15:46:19.450255 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bcec8f0c1c45bdf87fbd59304e0059ebc71ad896e88f3033611e2179259226e3"} Dec 12 15:46:19 crc kubenswrapper[4693]: I1212 15:46:19.450309 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:19 crc kubenswrapper[4693]: I1212 15:46:19.451167 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:19 crc kubenswrapper[4693]: I1212 15:46:19.451224 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:19 crc kubenswrapper[4693]: I1212 15:46:19.451242 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:19 crc kubenswrapper[4693]: I1212 15:46:19.779977 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:19 crc kubenswrapper[4693]: I1212 15:46:19.781376 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:19 crc kubenswrapper[4693]: I1212 15:46:19.781450 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:19 crc kubenswrapper[4693]: I1212 15:46:19.781474 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:19 crc kubenswrapper[4693]: I1212 15:46:19.781520 4693 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.362857 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.363100 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.364664 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.364722 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.364733 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.379024 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.456290 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7e74fc49bf4c47ad5e84f055d0a28da0a1a77c4aead41edab8df49991ff250fa"} 
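
The probe transitions in this window ("startup" flipping from "unhealthy" to "started", "readiness" from "" to "ready") are the kubelet's deadline-bounded HTTP(S) checks against each static pod's healthz endpoint. A minimal sketch of such a check follows, in Go; it is illustrative only, not the kubelet's prober implementation. The URL is the cluster-policy-controller healthz endpoint that appears at 15:46:26 below, and skipping TLS verification is an assumption made purely to keep the sketch self-contained:

// probe_sketch.go - a minimal, hypothetical startup/readiness-style probe.
// NOT the kubelet prober; it only shows why the entries below alternate
// between "context deadline exceeded" and "connect: connection refused".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probe(url string) string {
	client := &http.Client{
		Timeout: time.Second, // probes are deadline-bounded, like the kubelet's
		Transport: &http.Transport{
			// Assumption: the static-pod healthz endpoints here serve
			// self-signed certs, so verification is skipped for illustration.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// Listener not bound yet -> "connect: connection refused";
		// listener up but slow -> "context deadline exceeded".
		return fmt.Sprintf("failure: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		// Compare the "HTTP probe failed with statuscode: 403" entries below.
		return fmt.Sprintf("failure: HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return "success"
}

func main() {
	fmt.Println(probe("https://192.168.126.11:10357/healthz")) // endpoint taken from the log
}

Run against a pod whose listener is not yet bound, this prints "connect: connection refused"; against one that accepts but does not answer within the deadline, it prints "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" — the two probe failure outputs recorded at 15:46:26 through 15:46:32 below.
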
Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.456349 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"71bf63aa4388b0b929872aed61fe7eb400fa636b9e479395331e3ed433b2ad79"} Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.456379 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.456412 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.456379 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.456511 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.457684 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.457713 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.457722 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.457696 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.457798 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.457808 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.457804 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.457840 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.457857 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.770560 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.812409 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.812684 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.814125 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.814155 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:20 crc kubenswrapper[4693]: I1212 15:46:20.814165 4693 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:21 crc kubenswrapper[4693]: I1212 15:46:21.460480 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:21 crc kubenswrapper[4693]: I1212 15:46:21.460522 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:21 crc kubenswrapper[4693]: I1212 15:46:21.461986 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:21 crc kubenswrapper[4693]: I1212 15:46:21.462035 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:21 crc kubenswrapper[4693]: I1212 15:46:21.462053 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:21 crc kubenswrapper[4693]: I1212 15:46:21.462150 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:21 crc kubenswrapper[4693]: I1212 15:46:21.462179 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:21 crc kubenswrapper[4693]: I1212 15:46:21.462191 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:21 crc kubenswrapper[4693]: I1212 15:46:21.546566 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 12 15:46:21 crc kubenswrapper[4693]: I1212 15:46:21.802334 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:21 crc kubenswrapper[4693]: I1212 15:46:21.802610 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:21 crc kubenswrapper[4693]: I1212 15:46:21.802663 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:21 crc kubenswrapper[4693]: I1212 15:46:21.803945 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:21 crc kubenswrapper[4693]: I1212 15:46:21.803992 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:21 crc kubenswrapper[4693]: I1212 15:46:21.804008 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:22 crc kubenswrapper[4693]: I1212 15:46:22.463047 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:22 crc kubenswrapper[4693]: I1212 15:46:22.463148 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:22 crc kubenswrapper[4693]: I1212 15:46:22.463185 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:22 crc kubenswrapper[4693]: I1212 15:46:22.464358 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:22 crc kubenswrapper[4693]: I1212 15:46:22.464394 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:22 crc kubenswrapper[4693]: I1212 15:46:22.464404 4693 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:22 crc kubenswrapper[4693]: I1212 15:46:22.464763 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:22 crc kubenswrapper[4693]: I1212 15:46:22.464797 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:22 crc kubenswrapper[4693]: I1212 15:46:22.464809 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:22 crc kubenswrapper[4693]: I1212 15:46:22.464838 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:22 crc kubenswrapper[4693]: I1212 15:46:22.464871 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:22 crc kubenswrapper[4693]: I1212 15:46:22.464887 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:23 crc kubenswrapper[4693]: I1212 15:46:23.105656 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:23 crc kubenswrapper[4693]: I1212 15:46:23.465199 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:23 crc kubenswrapper[4693]: I1212 15:46:23.466652 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:23 crc kubenswrapper[4693]: I1212 15:46:23.466693 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:23 crc kubenswrapper[4693]: I1212 15:46:23.466702 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:23 crc kubenswrapper[4693]: E1212 15:46:23.476488 4693 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 15:46:23 crc kubenswrapper[4693]: I1212 15:46:23.779142 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:46:23 crc kubenswrapper[4693]: I1212 15:46:23.779423 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:23 crc kubenswrapper[4693]: I1212 15:46:23.780617 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:23 crc kubenswrapper[4693]: I1212 15:46:23.780672 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:23 crc kubenswrapper[4693]: I1212 15:46:23.780682 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:24 crc kubenswrapper[4693]: I1212 15:46:24.481178 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:46:24 crc kubenswrapper[4693]: I1212 15:46:24.481296 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:24 crc kubenswrapper[4693]: I1212 15:46:24.482186 4693 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:24 crc kubenswrapper[4693]: I1212 15:46:24.482245 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:24 crc kubenswrapper[4693]: I1212 15:46:24.482262 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:26 crc kubenswrapper[4693]: I1212 15:46:26.779381 4693 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 15:46:26 crc kubenswrapper[4693]: I1212 15:46:26.779500 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 15:46:29 crc kubenswrapper[4693]: I1212 15:46:29.164641 4693 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 12 15:46:29 crc kubenswrapper[4693]: E1212 15:46:29.403938 4693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Dec 12 15:46:29 crc kubenswrapper[4693]: I1212 15:46:29.564357 4693 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 15:46:29 crc kubenswrapper[4693]: I1212 15:46:29.564424 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 12 15:46:29 crc kubenswrapper[4693]: I1212 15:46:29.570970 4693 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 15:46:29 crc kubenswrapper[4693]: I1212 15:46:29.571046 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 12 15:46:29 crc kubenswrapper[4693]: I1212 15:46:29.879246 4693 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 12 15:46:29 crc kubenswrapper[4693]: I1212 15:46:29.879373 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 12 15:46:30 crc kubenswrapper[4693]: I1212 15:46:30.197236 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 12 15:46:30 crc kubenswrapper[4693]: I1212 15:46:30.197420 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:30 crc kubenswrapper[4693]: I1212 15:46:30.198580 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:30 crc kubenswrapper[4693]: I1212 15:46:30.198621 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:30 crc kubenswrapper[4693]: I1212 15:46:30.198635 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:30 crc kubenswrapper[4693]: I1212 15:46:30.757238 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 12 15:46:30 crc kubenswrapper[4693]: I1212 15:46:30.757429 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:30 crc kubenswrapper[4693]: I1212 15:46:30.758529 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:30 crc kubenswrapper[4693]: I1212 15:46:30.758559 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:30 crc kubenswrapper[4693]: I1212 15:46:30.758570 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:30 crc kubenswrapper[4693]: I1212 15:46:30.769589 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 12 15:46:30 crc kubenswrapper[4693]: I1212 15:46:30.925500 4693 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 12 15:46:30 crc kubenswrapper[4693]: I1212 15:46:30.925584 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 12 15:46:31 crc kubenswrapper[4693]: I1212 15:46:31.483435 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:31 crc kubenswrapper[4693]: I1212 15:46:31.484619 4693 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:31 crc kubenswrapper[4693]: I1212 15:46:31.484678 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:31 crc kubenswrapper[4693]: I1212 15:46:31.484688 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:31 crc kubenswrapper[4693]: I1212 15:46:31.807226 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:31 crc kubenswrapper[4693]: I1212 15:46:31.807535 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:31 crc kubenswrapper[4693]: I1212 15:46:31.808019 4693 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 12 15:46:31 crc kubenswrapper[4693]: I1212 15:46:31.808106 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 12 15:46:31 crc kubenswrapper[4693]: I1212 15:46:31.809041 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:31 crc kubenswrapper[4693]: I1212 15:46:31.809123 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:31 crc kubenswrapper[4693]: I1212 15:46:31.809147 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:31 crc kubenswrapper[4693]: I1212 15:46:31.813778 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:32 crc kubenswrapper[4693]: I1212 15:46:32.485776 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:32 crc kubenswrapper[4693]: I1212 15:46:32.486150 4693 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 12 15:46:32 crc kubenswrapper[4693]: I1212 15:46:32.486195 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 12 15:46:32 crc kubenswrapper[4693]: I1212 15:46:32.486680 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:32 crc kubenswrapper[4693]: I1212 15:46:32.486716 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:32 crc kubenswrapper[4693]: I1212 15:46:32.486736 4693 
Dec 12 15:46:33 crc kubenswrapper[4693]: E1212 15:46:33.476609 4693 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 15:46:34 crc kubenswrapper[4693]: E1212 15:46:34.555654 4693 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Dec 12 15:46:34 crc kubenswrapper[4693]: I1212 15:46:34.558253 4693 trace.go:236] Trace[50502234]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 15:46:21.149) (total time: 13408ms):
Dec 12 15:46:34 crc kubenswrapper[4693]: Trace[50502234]: ---"Objects listed" error: 13408ms (15:46:34.558)
Dec 12 15:46:34 crc kubenswrapper[4693]: Trace[50502234]: [13.408221754s] [13.408221754s] END
Dec 12 15:46:34 crc kubenswrapper[4693]: I1212 15:46:34.558339 4693 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 12 15:46:34 crc kubenswrapper[4693]: I1212 15:46:34.558791 4693 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 12 15:46:34 crc kubenswrapper[4693]: I1212 15:46:34.558920 4693 trace.go:236] Trace[804919933]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 15:46:20.680) (total time: 13878ms):
Dec 12 15:46:34 crc kubenswrapper[4693]: Trace[804919933]: ---"Objects listed" error: 13878ms (15:46:34.558)
Dec 12 15:46:34 crc kubenswrapper[4693]: Trace[804919933]: [13.878673576s] [13.878673576s] END
Dec 12 15:46:34 crc kubenswrapper[4693]: I1212 15:46:34.558979 4693 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 12 15:46:34 crc kubenswrapper[4693]: I1212 15:46:34.559057 4693 trace.go:236] Trace[631475317]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 15:46:21.084) (total time: 13474ms):
Dec 12 15:46:34 crc kubenswrapper[4693]: Trace[631475317]: ---"Objects listed" error: 13474ms (15:46:34.558)
Dec 12 15:46:34 crc kubenswrapper[4693]: Trace[631475317]: [13.474135644s] [13.474135644s] END
Dec 12 15:46:34 crc kubenswrapper[4693]: I1212 15:46:34.559085 4693 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 12 15:46:34 crc kubenswrapper[4693]: I1212 15:46:34.559906 4693 trace.go:236] Trace[393807123]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 15:46:21.913) (total time: 12646ms):
Dec 12 15:46:34 crc kubenswrapper[4693]: Trace[393807123]: ---"Objects listed" error: 12646ms (15:46:34.559)
Dec 12 15:46:34 crc kubenswrapper[4693]: Trace[393807123]: [12.646816046s] [12.646816046s] END
Dec 12 15:46:34 crc kubenswrapper[4693]: I1212 15:46:34.559934 4693 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.168730 4693 apiserver.go:52] "Watching apiserver"
Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.170781 4693 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
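The Trace[...] and "Caches populated" lines above come from client-go's reflector machinery: each informer runs an initial LIST (here taking 12-14s while the apiserver recovers) and then a WATCH. A sketch of the same start-up using the public client-go API; the kubeconfig path is hypothetical, since inside the kubelet these informers are built from the node's own client configuration:

// informer_sync.go - the informer start-up that emits "Reflector ListAndWatch"
// traces and "Caches populated" lines like those above.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	factory := informers.NewSharedInformerFactory(client, 30*time.Minute)
	// Each Informer() call registers a reflector; Start launches its ListAndWatch,
	// which is what the long "Objects listed" traces above are timing.
	nodes := factory.Core().V1().Nodes().Informer()
	services := factory.Core().V1().Services().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Corresponds to the "Caches populated" lines: this returns once the initial
	// LIST for every registered type has been stored in the local cache.
	if !cache.WaitForCacheSync(stop, nodes.HasSynced, services.HasSynced) {
		panic("caches never synced")
	}
	fmt.Println("caches populated")
}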
pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.171525 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.171697 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.171749 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.171925 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.172243 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.172246 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.172373 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.172595 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.172636 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.174036 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.175022 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.176130 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.176228 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.176289 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.176427 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.176986 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.178126 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.181516 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.196617 4693 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.200210 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.211952 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.221355 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.231966 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.241537 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.252628 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
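Every "Failed to update status for pod" entry above is the status manager retrying the same operation: a strategic-merge PATCH against the pod's status subresource, which the pod.network-node-identity.openshift.io admission webhook rejects while nothing listens on 127.0.0.1:9743. A sketch of an equivalent patch via client-go; the pod name and namespace come from the log, but the patch body is a simplified stand-in for the kubelet's full diff, and clientset construction is omitted:

// status_patch.go - the shape of the status PATCH being retried above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func patchPodStatus(client kubernetes.Interface) error {
	patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False","reason":"ContainersNotReady"}]}}`)
	// The trailing "status" argument targets the /status subresource; the request
	// still passes through the admission chain, which is why the webhook's
	// connection refusal above fails the whole update.
	_, err := client.CoreV1().Pods("openshift-network-operator").Patch(
		context.TODO(),
		"network-operator-58b4c7f79c-55gtf",
		types.StrategicMergePatchType,
		patch,
		metav1.PatchOptions{},
		"status",
	)
	return err
}

func main() {
	fmt.Println("wire patchPodStatus to a clientset built from your kubeconfig")
}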
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.263304 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.263377 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.263476 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.263545 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.263608 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.263712 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.263797 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.263077 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.263370 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.263640 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.263854 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.263866 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.263881 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264183 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264242 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264211 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264355 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264402 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264437 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264461 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264485 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264686 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264710 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264734 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264735 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264762 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264787 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264809 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264830 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264838 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264852 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264875 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264909 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264921 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264944 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264952 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264964 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264995 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.264993 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265017 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265043 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265056 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265130 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265158 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265182 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265205 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265229 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265291 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265318 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265340 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265363 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265385 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265408 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265435 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265129 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265459 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265480 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265536 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265559 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265580 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265602 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265625 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265646 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265669 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265696 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265717 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265742 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265764 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265820 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265843 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265865 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265889 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265166 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265312 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265321 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265369 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265431 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265495 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265512 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265629 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265689 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265701 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265723 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265827 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265826 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265874 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266741 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265898 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.265912 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:46:35.765894548 +0000 UTC m=+22.934534149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266797 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266803 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266830 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266858 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266885 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266911 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266935 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266960 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266959 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266985 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267012 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267037 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267060 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267082 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267103 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267126 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267136 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267148 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267172 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267194 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267216 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267237 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267258 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267297 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267864 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267899 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267924 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267949 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267972 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.267995 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268027 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268053 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268078 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268100 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268110 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268123 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268186 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268225 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268251 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268293 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268317 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268340 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268362 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268383 4693 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268396 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268405 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268447 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268468 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268488 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268503 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268518 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268534 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268549 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268565 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268580 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268595 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268611 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268652 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268666 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268684 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268698 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268715 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268730 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268746 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268762 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268781 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268798 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268814 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268829 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268845 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268861 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268877 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268895 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268912 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268928 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268957 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268975 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.268990 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269005 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269022 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269038 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269055 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269075 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269091 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269109 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269125 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269140 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269156 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269171 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269188 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269209 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269240 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269315 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269338 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269359 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269375 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269390 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269411 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269427 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269443 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269459 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269489 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269505 4693 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269522 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269549 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269566 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269583 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269599 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269615 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269632 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269649 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269667 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269682 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269699 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269714 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269729 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269746 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269761 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269776 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269791 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269807 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269826 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269843 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269865 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269882 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269899 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269914 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269932 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269948 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269964 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269980 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.269997 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270013 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270029 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270045 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270061 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270098 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270120 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270140 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270160 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270179 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270198 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270218 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270238 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270254 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270302 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270324 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270343 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270361 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270387 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270533 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270530 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265988 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266064 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266084 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266084 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266110 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266195 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266202 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266215 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266214 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266634 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.266642 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270806 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.270850 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.265905 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.271068 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.271110 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.271250 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.271246 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.271329 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.271394 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.271523 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.271665 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.271680 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.271705 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.272384 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.273327 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.273662 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.273747 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.274116 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.274203 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.274213 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.274396 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.274577 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.274769 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.274835 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.274846 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.274889 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.274900 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.275084 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.275211 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.275509 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.275706 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.275743 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.275869 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.275888 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.276121 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.276656 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.276747 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.276753 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.276805 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.277003 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.277040 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.277123 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.277312 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.276843 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.276887 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.276169 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.277636 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.277736 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.277919 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.277953 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.278500 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.278917 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.279032 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.279028 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.279068 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.279374 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). 
InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.279490 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.279428 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.279573 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.279621 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.279703 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.279787 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.279872 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.280060 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.280183 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.280232 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.280383 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.280597 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.280677 4693 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.280749 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.280977 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.281044 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.281454 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.281775 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.282195 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.282345 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.282584 4693 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.282678 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:35.782654833 +0000 UTC m=+22.951294504 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.282910 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.282925 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.283248 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.283488 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.283621 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.283865 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.284117 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.284317 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.285220 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.285251 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.285259 4693 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.286006 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.286223 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.288923 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.289196 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.289355 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.289366 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:35.789336766 +0000 UTC m=+22.957976377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.289573 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.289735 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.290065 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.289421 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.291004 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.291215 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.291592 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.291821 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.292093 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.292340 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.292428 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.292478 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.292558 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.292961 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.292988 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.293243 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.293386 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.293469 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.293907 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.294466 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298295 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298752 4693 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298790 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298803 4693 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298815 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298824 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298834 4693 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298842 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298851 4693 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298860 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298869 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node 
\"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298877 4693 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298887 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298896 4693 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298905 4693 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298913 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298921 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298930 4693 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298939 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298948 4693 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298959 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298968 4693 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298978 4693 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298987 4693 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 
15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.298995 4693 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.299004 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.299013 4693 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.299021 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.299030 4693 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.299039 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.299048 4693 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.299057 4693 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.299067 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.299075 4693 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.300757 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.300873 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.300953 4693 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] 
Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.301098 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:35.80107336 +0000 UTC m=+22.969713051 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.300757 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.301320 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.301401 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.302530 4693 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.302672 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:35.802655141 +0000 UTC m=+22.971294812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.303232 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.304607 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.304819 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.305574 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.305806 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.306783 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.308861 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.310112 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.310201 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.310235 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.310193 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.310357 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.310627 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.311170 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.311982 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.312187 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.313138 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.318125 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.320158 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.320160 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.320239 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.320297 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.320589 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.320725 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.320726 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.320830 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.320767 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.320919 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.320921 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.320967 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.321105 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.321187 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.322800 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.322916 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.323159 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.323265 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.323556 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.323624 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.328851 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.337305 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.340566 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.345389 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.366896 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.367491 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.368740 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.371135 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.372634 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.373540 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.374096 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.375124 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.375886 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.377012 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.377755 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.379103 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.379743 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.380485 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.381708 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.382338 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.383597 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.384110 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.385262 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.386562 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.387146 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.388397 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.388818 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" 
path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.389819 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.390414 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.391219 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.392800 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.393411 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.394541 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.395067 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.396312 4693 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.396436 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.398317 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.399379 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.399825 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.399891 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: 
I1212 15:46:35.399915 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.399989 4693 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400009 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400022 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400034 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400047 4693 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400060 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400074 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400085 4693 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400097 4693 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400108 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400119 4693 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400131 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" 
(UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400143 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400155 4693 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400166 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400178 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400190 4693 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400201 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400212 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400223 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400233 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400244 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400257 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400268 4693 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400309 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on 
node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400320 4693 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400331 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400343 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400355 4693 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400366 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400377 4693 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400390 4693 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400402 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400413 4693 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400424 4693 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400435 4693 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400447 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400458 4693 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc 
kubenswrapper[4693]: I1212 15:46:35.400469 4693 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400479 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400488 4693 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400499 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400508 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400519 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400529 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400539 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400549 4693 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400559 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400569 4693 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400581 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400592 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400604 
4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400616 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400626 4693 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400637 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400647 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400658 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400668 4693 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400678 4693 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400690 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400701 4693 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400713 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400724 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400734 4693 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 
15:46:35.400744 4693 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400755 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400765 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400775 4693 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400785 4693 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400795 4693 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400806 4693 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400816 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400827 4693 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400837 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400848 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400859 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400871 4693 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 
15:46:35.400882 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400894 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400905 4693 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400917 4693 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400929 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400939 4693 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400952 4693 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400963 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400975 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400987 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.400998 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401012 4693 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401032 4693 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 
15:46:35.401044 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401055 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401067 4693 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401080 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401090 4693 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401102 4693 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401113 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401124 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401137 4693 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401147 4693 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401158 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401168 4693 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401179 4693 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401190 4693 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401200 4693 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401210 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401220 4693 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401230 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401241 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401253 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401263 4693 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401296 4693 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401309 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401320 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401332 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401343 4693 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc 
kubenswrapper[4693]: I1212 15:46:35.401354 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401364 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401375 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401385 4693 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401395 4693 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401410 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401421 4693 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401431 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401443 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401454 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401466 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401477 4693 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401489 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" 
DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401501 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401513 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401524 4693 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401535 4693 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401545 4693 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401556 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401567 4693 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401577 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401587 4693 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401597 4693 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401607 4693 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401618 4693 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401694 4693 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401875 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401926 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.401950 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402242 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402257 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402288 4693 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402301 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402312 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402321 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402329 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402337 4693 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402345 4693 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402354 4693 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402362 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402370 4693 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402378 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402386 4693 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402394 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402402 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402442 4693 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402456 4693 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402465 4693 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.402647 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.403741 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.404635 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.405842 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.406458 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.407797 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.408600 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.409818 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.410393 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.411333 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.411854 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.412979 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.413435 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.414230 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.414763 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.415661 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.416210 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.416945 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.417841 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.423069 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.431076 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.441836 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.458452 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.470743 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.473810 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.484512 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.493665 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.493883 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.494773 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.497074 4693 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e" exitCode=255 Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.497155 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e"} Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.501466 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.505812 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.507833 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 15:46:35 crc kubenswrapper[4693]: W1212 15:46:35.513438 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-b262faf0050c5f33e9894827e13954f02e57a0f24b53bffd92d72814b4258a54 WatchSource:0}: Error finding container b262faf0050c5f33e9894827e13954f02e57a0f24b53bffd92d72814b4258a54: Status 404 returned error can't find the container with id b262faf0050c5f33e9894827e13954f02e57a0f24b53bffd92d72814b4258a54 Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.517410 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: W1212 15:46:35.522123 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-4cade8ece2a56b2872462a1ea9c0314258c797686c37ca824b8838fe2e47ac45 WatchSource:0}: Error finding container 4cade8ece2a56b2872462a1ea9c0314258c797686c37ca824b8838fe2e47ac45: Status 404 returned error can't find the container with id 4cade8ece2a56b2872462a1ea9c0314258c797686c37ca824b8838fe2e47ac45 Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.530147 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.539856 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.550832 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.559451 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.569175 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.573792 4693 scope.go:117] "RemoveContainer" containerID="ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.575402 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.589674 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12
T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.600912 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.611497 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.621251 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.632258 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.645441 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.654943 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.665605 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.804957 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.805047 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.805086 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.805118 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.805153 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:46:36.805119083 +0000 UTC m=+23.973758684 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:46:35 crc kubenswrapper[4693]: I1212 15:46:35.805210 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.805247 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.805287 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.805288 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.805319 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.805332 4693 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.805340 4693 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.805288 4693 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.805303 4693 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.805383 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:36.80536531 +0000 UTC m=+23.974004911 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.805493 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:36.805473452 +0000 UTC m=+23.974113053 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.805519 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:36.805510213 +0000 UTC m=+23.974149954 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:46:35 crc kubenswrapper[4693]: E1212 15:46:35.805542 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:36.805533714 +0000 UTC m=+23.974173445 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.090645 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-nth2b"] Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.090977 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-nth2b" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.093075 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.093203 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.093233 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.105816 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.107737 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dxrb\" (UniqueName: \"kubernetes.io/projected/20c9fcf7-c537-47fe-9699-bc3d411dd964-kube-api-access-2dxrb\") pod \"node-resolver-nth2b\" (UID: \"20c9fcf7-c537-47fe-9699-bc3d411dd964\") " pod="openshift-dns/node-resolver-nth2b" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.107782 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/20c9fcf7-c537-47fe-9699-bc3d411dd964-hosts-file\") pod \"node-resolver-nth2b\" (UID: \"20c9fcf7-c537-47fe-9699-bc3d411dd964\") " pod="openshift-dns/node-resolver-nth2b" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.118361 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.132793 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.142700 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.155189 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.167582 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.178970 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.193657 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.206379 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.208624 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/20c9fcf7-c537-47fe-9699-bc3d411dd964-hosts-file\") pod \"node-resolver-nth2b\" (UID: \"20c9fcf7-c537-47fe-9699-bc3d411dd964\") " pod="openshift-dns/node-resolver-nth2b" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.208711 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxrb\" (UniqueName: \"kubernetes.io/projected/20c9fcf7-c537-47fe-9699-bc3d411dd964-kube-api-access-2dxrb\") pod \"node-resolver-nth2b\" (UID: \"20c9fcf7-c537-47fe-9699-bc3d411dd964\") " pod="openshift-dns/node-resolver-nth2b" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.208817 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/20c9fcf7-c537-47fe-9699-bc3d411dd964-hosts-file\") pod \"node-resolver-nth2b\" (UID: 
\"20c9fcf7-c537-47fe-9699-bc3d411dd964\") " pod="openshift-dns/node-resolver-nth2b" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.227904 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dxrb\" (UniqueName: \"kubernetes.io/projected/20c9fcf7-c537-47fe-9699-bc3d411dd964-kube-api-access-2dxrb\") pod \"node-resolver-nth2b\" (UID: \"20c9fcf7-c537-47fe-9699-bc3d411dd964\") " pod="openshift-dns/node-resolver-nth2b" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.356042 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.356106 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:36 crc kubenswrapper[4693]: E1212 15:46:36.356188 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:46:36 crc kubenswrapper[4693]: E1212 15:46:36.356305 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.402019 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-nth2b" Dec 12 15:46:36 crc kubenswrapper[4693]: W1212 15:46:36.413680 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c9fcf7_c537_47fe_9699_bc3d411dd964.slice/crio-b3d0b5f976be5e770bd99945be154505c5f95a148822b17c3021e0ecc9ed8045 WatchSource:0}: Error finding container b3d0b5f976be5e770bd99945be154505c5f95a148822b17c3021e0ecc9ed8045: Status 404 returned error can't find the container with id b3d0b5f976be5e770bd99945be154505c5f95a148822b17c3021e0ecc9ed8045 Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.475464 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wvw2c"] Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.476436 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.478286 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-gvtgv"] Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.479832 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.480150 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-sllz5"] Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.480193 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.480702 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.480731 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.480810 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.481369 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.482127 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.485315 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.485484 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.485516 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.485622 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.487911 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.488638 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.488808 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.504660 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512682 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-os-release\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512718 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e54028d7-cdbb-4fa9-92cd-9570edacb888-multus-daemon-config\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512734 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c6efc9d0-9c03-4235-ab59-96263c372e09-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512754 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk9xs\" (UniqueName: \"kubernetes.io/projected/e54028d7-cdbb-4fa9-92cd-9570edacb888-kube-api-access-zk9xs\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512768 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71d6bb6b-1211-4bbd-8946-2010438d6a5d-mcd-auth-proxy-config\") pod \"machine-config-daemon-wvw2c\" (UID: \"71d6bb6b-1211-4bbd-8946-2010438d6a5d\") " pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512782 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-var-lib-cni-multus\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512797 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6efc9d0-9c03-4235-ab59-96263c372e09-system-cni-dir\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512810 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-system-cni-dir\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512823 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-cnibin\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512835 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e54028d7-cdbb-4fa9-92cd-9570edacb888-cni-binary-copy\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512849 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-run-netns\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512891 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/71d6bb6b-1211-4bbd-8946-2010438d6a5d-rootfs\") pod \"machine-config-daemon-wvw2c\" (UID: \"71d6bb6b-1211-4bbd-8946-2010438d6a5d\") " pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512905 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-hostroot\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512921 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71d6bb6b-1211-4bbd-8946-2010438d6a5d-proxy-tls\") pod \"machine-config-daemon-wvw2c\" (UID: \"71d6bb6b-1211-4bbd-8946-2010438d6a5d\") " pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512943 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-multus-conf-dir\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512958 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-run-multus-certs\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512972 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c6efc9d0-9c03-4235-ab59-96263c372e09-os-release\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.512989 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h672\" (UniqueName: \"kubernetes.io/projected/c6efc9d0-9c03-4235-ab59-96263c372e09-kube-api-access-8h672\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.513005 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c6efc9d0-9c03-4235-ab59-96263c372e09-cnibin\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.513029 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-multus-cni-dir\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.513045 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-var-lib-kubelet\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.513063 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c6efc9d0-9c03-4235-ab59-96263c372e09-cni-binary-copy\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.513089 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-multus-socket-dir-parent\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc 
kubenswrapper[4693]: I1212 15:46:36.513108 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-run-k8s-cni-cncf-io\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.513125 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-etc-kubernetes\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.513139 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh2lt\" (UniqueName: \"kubernetes.io/projected/71d6bb6b-1211-4bbd-8946-2010438d6a5d-kube-api-access-zh2lt\") pod \"machine-config-daemon-wvw2c\" (UID: \"71d6bb6b-1211-4bbd-8946-2010438d6a5d\") " pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.513154 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c6efc9d0-9c03-4235-ab59-96263c372e09-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.513168 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-var-lib-cni-bin\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.515424 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4cade8ece2a56b2872462a1ea9c0314258c797686c37ca824b8838fe2e47ac45"} Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.527871 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.529758 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05"} Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.529808 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642"} Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.529819 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b262faf0050c5f33e9894827e13954f02e57a0f24b53bffd92d72814b4258a54"} Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.533029 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051"} Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.533067 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6ae599c861d69fe833503f4525cddb726582ccec59cbb08f1d7be5397c05f0b8"} Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.538343 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.540741 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.542957 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe"} Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.543098 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.545482 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nth2b" event={"ID":"20c9fcf7-c537-47fe-9699-bc3d411dd964","Type":"ContainerStarted","Data":"b3d0b5f976be5e770bd99945be154505c5f95a148822b17c3021e0ecc9ed8045"} Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.552253 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.562832 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.571836 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.585822 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12
T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.598724 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614166 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh2lt\" (UniqueName: \"kubernetes.io/projected/71d6bb6b-1211-4bbd-8946-2010438d6a5d-kube-api-access-zh2lt\") pod \"machine-config-daemon-wvw2c\" (UID: \"71d6bb6b-1211-4bbd-8946-2010438d6a5d\") " pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614200 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c6efc9d0-9c03-4235-ab59-96263c372e09-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614236 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-var-lib-cni-bin\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614253 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-etc-kubernetes\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614290 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-os-release\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " 
pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614306 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e54028d7-cdbb-4fa9-92cd-9570edacb888-multus-daemon-config\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614322 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c6efc9d0-9c03-4235-ab59-96263c372e09-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614344 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71d6bb6b-1211-4bbd-8946-2010438d6a5d-mcd-auth-proxy-config\") pod \"machine-config-daemon-wvw2c\" (UID: \"71d6bb6b-1211-4bbd-8946-2010438d6a5d\") " pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614366 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk9xs\" (UniqueName: \"kubernetes.io/projected/e54028d7-cdbb-4fa9-92cd-9570edacb888-kube-api-access-zk9xs\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614383 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6efc9d0-9c03-4235-ab59-96263c372e09-system-cni-dir\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614398 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-system-cni-dir\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614416 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-var-lib-cni-multus\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614434 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-cnibin\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614423 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-etc-kubernetes\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614461 4693 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-var-lib-cni-bin\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614449 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-run-netns\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614503 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6efc9d0-9c03-4235-ab59-96263c372e09-system-cni-dir\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614485 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-run-netns\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614539 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/71d6bb6b-1211-4bbd-8946-2010438d6a5d-rootfs\") pod \"machine-config-daemon-wvw2c\" (UID: \"71d6bb6b-1211-4bbd-8946-2010438d6a5d\") " pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614557 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-system-cni-dir\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614581 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e54028d7-cdbb-4fa9-92cd-9570edacb888-cni-binary-copy\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614608 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-cnibin\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614610 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71d6bb6b-1211-4bbd-8946-2010438d6a5d-proxy-tls\") pod \"machine-config-daemon-wvw2c\" (UID: \"71d6bb6b-1211-4bbd-8946-2010438d6a5d\") " pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614658 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-hostroot\") pod \"multus-sllz5\" (UID: 
\"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614721 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-multus-conf-dir\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615073 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c6efc9d0-9c03-4235-ab59-96263c372e09-os-release\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615100 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-run-multus-certs\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614753 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/71d6bb6b-1211-4bbd-8946-2010438d6a5d-rootfs\") pod \"machine-config-daemon-wvw2c\" (UID: \"71d6bb6b-1211-4bbd-8946-2010438d6a5d\") " pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615139 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c6efc9d0-9c03-4235-ab59-96263c372e09-os-release\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614962 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e54028d7-cdbb-4fa9-92cd-9570edacb888-multus-daemon-config\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615169 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e54028d7-cdbb-4fa9-92cd-9570edacb888-cni-binary-copy\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615184 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h672\" (UniqueName: \"kubernetes.io/projected/c6efc9d0-9c03-4235-ab59-96263c372e09-kube-api-access-8h672\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614889 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-hostroot\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 
15:46:36.614583 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-var-lib-cni-multus\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615219 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c6efc9d0-9c03-4235-ab59-96263c372e09-cnibin\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614992 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-multus-conf-dir\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615125 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71d6bb6b-1211-4bbd-8946-2010438d6a5d-mcd-auth-proxy-config\") pod \"machine-config-daemon-wvw2c\" (UID: \"71d6bb6b-1211-4bbd-8946-2010438d6a5d\") " pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.614728 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-os-release\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615297 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c6efc9d0-9c03-4235-ab59-96263c372e09-cnibin\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615150 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-run-multus-certs\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615313 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-multus-cni-dir\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615350 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-var-lib-kubelet\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615404 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/c6efc9d0-9c03-4235-ab59-96263c372e09-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615408 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c6efc9d0-9c03-4235-ab59-96263c372e09-cni-binary-copy\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615455 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-var-lib-kubelet\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615482 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-run-k8s-cni-cncf-io\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615503 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-multus-socket-dir-parent\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615532 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-multus-cni-dir\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615556 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-multus-socket-dir-parent\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615561 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e54028d7-cdbb-4fa9-92cd-9570edacb888-host-run-k8s-cni-cncf-io\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.615746 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.616006 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c6efc9d0-9c03-4235-ab59-96263c372e09-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.616307 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c6efc9d0-9c03-4235-ab59-96263c372e09-cni-binary-copy\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.622180 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71d6bb6b-1211-4bbd-8946-2010438d6a5d-proxy-tls\") pod \"machine-config-daemon-wvw2c\" (UID: \"71d6bb6b-1211-4bbd-8946-2010438d6a5d\") " pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.630692 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.632729 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk9xs\" (UniqueName: \"kubernetes.io/projected/e54028d7-cdbb-4fa9-92cd-9570edacb888-kube-api-access-zk9xs\") pod \"multus-sllz5\" (UID: \"e54028d7-cdbb-4fa9-92cd-9570edacb888\") " pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.637416 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh2lt\" (UniqueName: \"kubernetes.io/projected/71d6bb6b-1211-4bbd-8946-2010438d6a5d-kube-api-access-zh2lt\") pod \"machine-config-daemon-wvw2c\" (UID: \"71d6bb6b-1211-4bbd-8946-2010438d6a5d\") " pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.642953 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.646897 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h672\" (UniqueName: \"kubernetes.io/projected/c6efc9d0-9c03-4235-ab59-96263c372e09-kube-api-access-8h672\") pod \"multus-additional-cni-plugins-gvtgv\" (UID: \"c6efc9d0-9c03-4235-ab59-96263c372e09\") " pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.659857 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.676783 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.696515 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.721995 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.738789 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.752052 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.763960 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.779254 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.794098 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.805855 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.813822 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.817078 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.817196 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:36 crc kubenswrapper[4693]: E1212 15:46:36.817227 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:46:38.817202393 +0000 UTC m=+25.985842004 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.817260 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:46:36 crc kubenswrapper[4693]: E1212 15:46:36.817312 4693 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.817318 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.817351 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") 
" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:36 crc kubenswrapper[4693]: E1212 15:46:36.817402 4693 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:46:36 crc kubenswrapper[4693]: E1212 15:46:36.817441 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:38.817430769 +0000 UTC m=+25.986070370 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:46:36 crc kubenswrapper[4693]: E1212 15:46:36.817476 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:46:36 crc kubenswrapper[4693]: E1212 15:46:36.817506 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:46:36 crc kubenswrapper[4693]: E1212 15:46:36.817521 4693 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:36 crc kubenswrapper[4693]: E1212 15:46:36.817583 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:38.817566032 +0000 UTC m=+25.986205633 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:36 crc kubenswrapper[4693]: E1212 15:46:36.817811 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:46:36 crc kubenswrapper[4693]: E1212 15:46:36.817832 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:46:36 crc kubenswrapper[4693]: E1212 15:46:36.817844 4693 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:36 crc kubenswrapper[4693]: E1212 15:46:36.817890 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:38.81787818 +0000 UTC m=+25.986517781 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:36 crc kubenswrapper[4693]: E1212 15:46:36.817915 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:38.817908681 +0000 UTC m=+25.986548282 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.821062 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.825036 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.833079 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-sllz5" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.845388 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ps9gt"] Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.846367 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.849293 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.849315 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.849854 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.849946 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.850400 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.850503 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.850954 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.868925 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.894075 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.910520 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919305 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-ovnkube-config\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919372 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-ovnkube-script-lib\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919407 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-systemd-units\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919436 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919467 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-cni-bin\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919498 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-etc-openvswitch\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919523 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwpht\" (UniqueName: \"kubernetes.io/projected/fa7eae7d-b662-434d-96c1-de3080d579bd-kube-api-access-dwpht\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919567 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-ovn\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919596 4693 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-cni-netd\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919624 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-var-lib-openvswitch\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919655 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-kubelet\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919679 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-run-ovn-kubernetes\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919709 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa7eae7d-b662-434d-96c1-de3080d579bd-ovn-node-metrics-cert\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919734 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-slash\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919757 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-run-netns\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919785 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-node-log\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919823 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-openvswitch\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919856 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-systemd\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919919 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-log-socket\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.919947 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-env-overrides\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.930633 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.947152 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.960706 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.976323 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:36 crc kubenswrapper[4693]: I1212 15:46:36.990030 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:36Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.017238 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.020464 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-openvswitch\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 
15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.020498 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-systemd\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.020546 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-log-socket\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.020566 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-env-overrides\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.020638 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-systemd-units\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.020688 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-openvswitch\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.020715 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-systemd\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.020582 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-systemd-units\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.020842 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-log-socket\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.023156 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.023217 4693 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-ovnkube-config\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.023245 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-ovnkube-script-lib\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.023452 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-env-overrides\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.023585 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-cni-bin\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.023686 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwpht\" (UniqueName: \"kubernetes.io/projected/fa7eae7d-b662-434d-96c1-de3080d579bd-kube-api-access-dwpht\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.023801 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-etc-openvswitch\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.023904 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-ovn\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.024012 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-cni-netd\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.024103 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-cni-bin\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.024081 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-etc-openvswitch\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.024242 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-var-lib-openvswitch\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.024374 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-kubelet\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.024431 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-cni-netd\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.023938 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.024469 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-ovnkube-config\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.023904 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-ovnkube-script-lib\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.024412 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-kubelet\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.024446 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-run-ovn-kubernetes\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.024376 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-var-lib-openvswitch\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.024340 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-ovn\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.024725 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-run-ovn-kubernetes\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.024805 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-slash\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.024888 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa7eae7d-b662-434d-96c1-de3080d579bd-ovn-node-metrics-cert\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.024961 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-slash\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.027357 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-run-netns\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.027394 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-node-log\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.027459 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-node-log\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.027490 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-run-netns\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.031737 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa7eae7d-b662-434d-96c1-de3080d579bd-ovn-node-metrics-cert\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.040540 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8
2799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.051763 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwpht\" (UniqueName: \"kubernetes.io/projected/fa7eae7d-b662-434d-96c1-de3080d579bd-kube-api-access-dwpht\") pod \"ovnkube-node-ps9gt\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.065073 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.085748 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.110344 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.184701 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:37 crc kubenswrapper[4693]: W1212 15:46:37.197500 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa7eae7d_b662_434d_96c1_de3080d579bd.slice/crio-11f8bc93ddecaef42867a9f1162942a85913bddc365c797db22423b8bdf5e5aa WatchSource:0}: Error finding container 11f8bc93ddecaef42867a9f1162942a85913bddc365c797db22423b8bdf5e5aa: Status 404 returned error can't find the container with id 11f8bc93ddecaef42867a9f1162942a85913bddc365c797db22423b8bdf5e5aa Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.356624 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:46:37 crc kubenswrapper[4693]: E1212 15:46:37.356753 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.548917 4693 generic.go:334] "Generic (PLEG): container finished" podID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerID="4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b" exitCode=0 Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.548999 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerDied","Data":"4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b"} Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.549032 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerStarted","Data":"11f8bc93ddecaef42867a9f1162942a85913bddc365c797db22423b8bdf5e5aa"} Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.550620 4693 generic.go:334] "Generic (PLEG): container finished" podID="c6efc9d0-9c03-4235-ab59-96263c372e09" containerID="f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e" exitCode=0 Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.550683 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" event={"ID":"c6efc9d0-9c03-4235-ab59-96263c372e09","Type":"ContainerDied","Data":"f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e"} Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.550707 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" event={"ID":"c6efc9d0-9c03-4235-ab59-96263c372e09","Type":"ContainerStarted","Data":"e54df0b7c1a7679e84bcc8ea650a195f55dc42b56f210d59029a743aec57a270"} Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.553111 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf"} Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.553150 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28"} Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.553163 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"c5be2ab2621be89be5876fb0229caa5f7e6c3e68dfeffed6fa3dd4fffa485fb7"} Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.554822 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sllz5" event={"ID":"e54028d7-cdbb-4fa9-92cd-9570edacb888","Type":"ContainerStarted","Data":"44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc"} Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.554894 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sllz5" event={"ID":"e54028d7-cdbb-4fa9-92cd-9570edacb888","Type":"ContainerStarted","Data":"82ec2e63e589f0aa2abf1285c8230afee16f088414767fdbff20cdc0eb36e810"} Dec 12 15:46:37 
crc kubenswrapper[4693]: I1212 15:46:37.556195 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nth2b" event={"ID":"20c9fcf7-c537-47fe-9699-bc3d411dd964","Type":"ContainerStarted","Data":"ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915"} Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.565445 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",
\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.581597 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.595902 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.620107 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPa
th\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"
192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.633676 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.648598 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.661726 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.673696 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.686845 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.701595 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.714447 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.727551 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec
8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.748422 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.760136 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.771687 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.782487 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.793042 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.803553 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.814744 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.828329 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-l
ib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.865544 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z 
is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.896551 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.937595 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:37 crc kubenswrapper[4693]: I1212 15:46:37.977710 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:37Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.016609 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc 
kubenswrapper[4693]: I1212 15:46:38.057463 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.356296 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.356349 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:38 crc kubenswrapper[4693]: E1212 15:46:38.356733 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:46:38 crc kubenswrapper[4693]: E1212 15:46:38.356830 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.566861 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerStarted","Data":"8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c"} Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.567232 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerStarted","Data":"ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e"} Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.567248 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerStarted","Data":"9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92"} Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.567262 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerStarted","Data":"54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48"} Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.567292 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerStarted","Data":"201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05"} Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.569448 4693 generic.go:334] "Generic (PLEG): container finished" podID="c6efc9d0-9c03-4235-ab59-96263c372e09" containerID="94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70" exitCode=0 Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.569533 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" event={"ID":"c6efc9d0-9c03-4235-ab59-96263c372e09","Type":"ContainerDied","Data":"94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70"} Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.571285 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f"} Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.584096 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.596630 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.613521 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.627589 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.639298 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.654829 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.668003 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.686585 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.702443 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.713809 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.727697 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-l
ib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.744779 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z 
is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.757579 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.771878 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.786356 4693 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.805477 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z 
is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.816933 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.830888 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.845259 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.851373 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.851527 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.851564 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:38 crc kubenswrapper[4693]: E1212 15:46:38.851597 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:46:42.851561095 +0000 UTC m=+30.020200756 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.851660 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:46:38 crc kubenswrapper[4693]: E1212 15:46:38.851685 4693 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:46:38 crc kubenswrapper[4693]: E1212 15:46:38.851713 4693 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.851732 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:38 crc kubenswrapper[4693]: E1212 15:46:38.851741 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:42.85172927 +0000 UTC m=+30.020368871 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:46:38 crc kubenswrapper[4693]: E1212 15:46:38.851829 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:42.851811882 +0000 UTC m=+30.020451673 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:46:38 crc kubenswrapper[4693]: E1212 15:46:38.851878 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:46:38 crc kubenswrapper[4693]: E1212 15:46:38.851901 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:46:38 crc kubenswrapper[4693]: E1212 15:46:38.851907 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:46:38 crc kubenswrapper[4693]: E1212 15:46:38.851935 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:46:38 crc kubenswrapper[4693]: E1212 15:46:38.851950 4693 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:38 crc kubenswrapper[4693]: E1212 15:46:38.852007 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:42.851983836 +0000 UTC m=+30.020623437 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:38 crc kubenswrapper[4693]: E1212 15:46:38.851917 4693 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:38 crc kubenswrapper[4693]: E1212 15:46:38.852074 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:42.852061978 +0000 UTC m=+30.020701579 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.864377 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.896831 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.936788 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:38 crc kubenswrapper[4693]: I1212 15:46:38.974999 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:38Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.017618 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.058776 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.096025 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.356678 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:46:39 crc kubenswrapper[4693]: E1212 15:46:39.356815 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.576674 4693 generic.go:334] "Generic (PLEG): container finished" podID="c6efc9d0-9c03-4235-ab59-96263c372e09" containerID="922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3" exitCode=0 Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.576748 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" event={"ID":"c6efc9d0-9c03-4235-ab59-96263c372e09","Type":"ContainerDied","Data":"922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3"} Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.582103 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerStarted","Data":"1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a"} Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.590941 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.604087 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.618974 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-l
ib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.640427 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z 
is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.660800 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.673811 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.687884 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.701867 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.714328 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.728190 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.744451 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.757929 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.771450 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.851643 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-fpnjv"] Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.851983 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-fpnjv" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.853772 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.853857 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.854397 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.855675 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.867755 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.879738 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.889217 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.900610 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.910807 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.936852 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.961616 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e063858d-709e-46eb-ab3a-c71ffd012b4a-host\") pod \"node-ca-fpnjv\" (UID: \"e063858d-709e-46eb-ab3a-c71ffd012b4a\") " pod="openshift-image-registry/node-ca-fpnjv" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.961743 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e063858d-709e-46eb-ab3a-c71ffd012b4a-serviceca\") pod \"node-ca-fpnjv\" (UID: \"e063858d-709e-46eb-ab3a-c71ffd012b4a\") " pod="openshift-image-registry/node-ca-fpnjv" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.961774 4693 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99qql\" (UniqueName: \"kubernetes.io/projected/e063858d-709e-46eb-ab3a-c71ffd012b4a-kube-api-access-99qql\") pod \"node-ca-fpnjv\" (UID: \"e063858d-709e-46eb-ab3a-c71ffd012b4a\") " pod="openshift-image-registry/node-ca-fpnjv" Dec 12 15:46:39 crc kubenswrapper[4693]: I1212 15:46:39.975205 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-12T15:46:39Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.017958 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.063679 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e063858d-709e-46eb-ab3a-c71ffd012b4a-serviceca\") pod \"node-ca-fpnjv\" (UID: \"e063858d-709e-46eb-ab3a-c71ffd012b4a\") " pod="openshift-image-registry/node-ca-fpnjv" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.063737 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99qql\" (UniqueName: \"kubernetes.io/projected/e063858d-709e-46eb-ab3a-c71ffd012b4a-kube-api-access-99qql\") pod \"node-ca-fpnjv\" (UID: \"e063858d-709e-46eb-ab3a-c71ffd012b4a\") " pod="openshift-image-registry/node-ca-fpnjv" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.063775 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e063858d-709e-46eb-ab3a-c71ffd012b4a-host\") pod \"node-ca-fpnjv\" (UID: \"e063858d-709e-46eb-ab3a-c71ffd012b4a\") " pod="openshift-image-registry/node-ca-fpnjv" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.063852 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e063858d-709e-46eb-ab3a-c71ffd012b4a-host\") pod \"node-ca-fpnjv\" (UID: \"e063858d-709e-46eb-ab3a-c71ffd012b4a\") " pod="openshift-image-registry/node-ca-fpnjv" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.064007 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z 
is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.065049 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e063858d-709e-46eb-ab3a-c71ffd012b4a-serviceca\") pod \"node-ca-fpnjv\" (UID: \"e063858d-709e-46eb-ab3a-c71ffd012b4a\") " pod="openshift-image-registry/node-ca-fpnjv" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.083615 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99qql\" (UniqueName: \"kubernetes.io/projected/e063858d-709e-46eb-ab3a-c71ffd012b4a-kube-api-access-99qql\") pod \"node-ca-fpnjv\" (UID: \"e063858d-709e-46eb-ab3a-c71ffd012b4a\") " pod="openshift-image-registry/node-ca-fpnjv" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.115872 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.155213 4693 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.164606 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-fpnjv" Dec 12 15:46:40 crc kubenswrapper[4693]: W1212 15:46:40.177776 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode063858d_709e_46eb_ab3a_c71ffd012b4a.slice/crio-85538de412928cf53d0cb7b9dc5a4443e1c4644a678865db0cd275267a9e6545 WatchSource:0}: Error finding container 85538de412928cf53d0cb7b9dc5a4443e1c4644a678865db0cd275267a9e6545: Status 404 returned error can't find the container with id 85538de412928cf53d0cb7b9dc5a4443e1c4644a678865db0cd275267a9e6545 Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.195598 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.234086 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.275530 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.356841 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.356936 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:40 crc kubenswrapper[4693]: E1212 15:46:40.357011 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:46:40 crc kubenswrapper[4693]: E1212 15:46:40.357112 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.588586 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fpnjv" event={"ID":"e063858d-709e-46eb-ab3a-c71ffd012b4a","Type":"ContainerStarted","Data":"f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa"} Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.588643 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fpnjv" event={"ID":"e063858d-709e-46eb-ab3a-c71ffd012b4a","Type":"ContainerStarted","Data":"85538de412928cf53d0cb7b9dc5a4443e1c4644a678865db0cd275267a9e6545"} Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.591746 4693 generic.go:334] "Generic (PLEG): container finished" podID="c6efc9d0-9c03-4235-ab59-96263c372e09" containerID="3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c" exitCode=0 Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.591783 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" event={"ID":"c6efc9d0-9c03-4235-ab59-96263c372e09","Type":"ContainerDied","Data":"3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c"} Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.606667 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.619769 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.634327 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.645430 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.657243 4693 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.677444 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z 
is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.687318 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.698502 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.710363 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.722448 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.736293 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.757368 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.796809 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.834561 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.879449 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.955014 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.956016 4693 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.958491 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.958538 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.958553 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.958689 4693 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.976043 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:40Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.989261 4693 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.989545 4693 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.990698 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:40 crc 
kubenswrapper[4693]: I1212 15:46:40.990737 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.990748 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.990767 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:40 crc kubenswrapper[4693]: I1212 15:46:40.990779 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:40Z","lastTransitionTime":"2025-12-12T15:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:41 crc kubenswrapper[4693]: E1212 15:46:41.004653 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.009053 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.009089 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.009099 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.009115 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.009127 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:41Z","lastTransitionTime":"2025-12-12T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:41 crc kubenswrapper[4693]: E1212 15:46:41.020782 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.024031 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.024069 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.024080 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.024093 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.024102 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:41Z","lastTransitionTime":"2025-12-12T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:41 crc kubenswrapper[4693]: E1212 15:46:41.034405 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.035925 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.037051 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.037071 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.037079 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.037093 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.037103 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:41Z","lastTransitionTime":"2025-12-12T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:41 crc kubenswrapper[4693]: E1212 15:46:41.052484 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.055893 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.055928 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.055939 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.055955 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.055967 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:41Z","lastTransitionTime":"2025-12-12T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:41 crc kubenswrapper[4693]: E1212 15:46:41.066136 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: E1212 15:46:41.066250 4693 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.067652 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.067688 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.067700 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.067717 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.067728 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:41Z","lastTransitionTime":"2025-12-12T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.077146 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.116572 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.155832 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.170149 4693 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.170192 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.170200 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.170214 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.170223 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:41Z","lastTransitionTime":"2025-12-12T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.202632 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.241877 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z 
is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.272401 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.272447 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.272459 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.272477 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.272488 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:41Z","lastTransitionTime":"2025-12-12T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.278854 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 
15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.328538 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.354031 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.356378 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:46:41 crc kubenswrapper[4693]: E1212 15:46:41.356506 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.374122 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.374159 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.374169 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.374184 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.374195 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:41Z","lastTransitionTime":"2025-12-12T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.403451 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.435312 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.476483 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.476516 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.476523 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.476537 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.476546 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:41Z","lastTransitionTime":"2025-12-12T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.579451 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.579557 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.579590 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.579620 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.579643 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:41Z","lastTransitionTime":"2025-12-12T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.599721 4693 generic.go:334] "Generic (PLEG): container finished" podID="c6efc9d0-9c03-4235-ab59-96263c372e09" containerID="1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3" exitCode=0 Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.599794 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" event={"ID":"c6efc9d0-9c03-4235-ab59-96263c372e09","Type":"ContainerDied","Data":"1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3"} Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.606751 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerStarted","Data":"f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed"} Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.623984 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.643459 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.654404 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.665983 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.677980 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.681946 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.682000 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.682010 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.682025 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.682036 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:41Z","lastTransitionTime":"2025-12-12T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.690021 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.713609 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.755358 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-l
ib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.784057 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.784126 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.784140 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.784157 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.784188 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:41Z","lastTransitionTime":"2025-12-12T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.801341 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576
ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.835992 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.875805 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.886932 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.886986 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.887001 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.887023 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 
15:46:41.887040 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:41Z","lastTransitionTime":"2025-12-12T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.915141 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.956294 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.989676 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.989715 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.989727 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.989743 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.989754 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:41Z","lastTransitionTime":"2025-12-12T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:41 crc kubenswrapper[4693]: I1212 15:46:41.995103 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.092263 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.092303 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.092312 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.092326 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.092335 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:42Z","lastTransitionTime":"2025-12-12T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.195089 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.195113 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.195121 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.195134 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.195143 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:42Z","lastTransitionTime":"2025-12-12T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.297332 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.297360 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.297368 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.297383 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.297392 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:42Z","lastTransitionTime":"2025-12-12T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.357075 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.357612 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:42 crc kubenswrapper[4693]: E1212 15:46:42.357766 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:46:42 crc kubenswrapper[4693]: E1212 15:46:42.358135 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.400135 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.400481 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.400665 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.400807 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.400925 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:42Z","lastTransitionTime":"2025-12-12T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.504239 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.504293 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.504307 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.504325 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.504339 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:42Z","lastTransitionTime":"2025-12-12T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.607151 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.607192 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.607201 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.607230 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.607241 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:42Z","lastTransitionTime":"2025-12-12T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.611647 4693 generic.go:334] "Generic (PLEG): container finished" podID="c6efc9d0-9c03-4235-ab59-96263c372e09" containerID="66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d" exitCode=0 Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.611699 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" event={"ID":"c6efc9d0-9c03-4235-ab59-96263c372e09","Type":"ContainerDied","Data":"66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d"} Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.634000 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.657487 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.674245 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.688163 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.701478 4693 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.709504 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.709559 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.709569 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.709583 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.709593 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:42Z","lastTransitionTime":"2025-12-12T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.720234 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:42Z 
is after 2025-08-24T17:21:41Z" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.733643 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.749481 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.762843 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-12T15:46:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.775779 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.792123 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd
e82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.808460 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.813343 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.813396 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.813407 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.813427 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.813438 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:42Z","lastTransitionTime":"2025-12-12T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.823541 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.838918 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.892301 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.892411 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.892438 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.892456 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.892476 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:42 crc kubenswrapper[4693]: E1212 15:46:42.892543 4693 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:46:42 crc kubenswrapper[4693]: E1212 15:46:42.892588 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:50.892573412 +0000 UTC m=+38.061213013 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:46:42 crc kubenswrapper[4693]: E1212 15:46:42.892910 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:46:50.892902321 +0000 UTC m=+38.061541922 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:46:42 crc kubenswrapper[4693]: E1212 15:46:42.892985 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:46:42 crc kubenswrapper[4693]: E1212 15:46:42.893003 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:46:42 crc kubenswrapper[4693]: E1212 15:46:42.893013 4693 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:42 crc kubenswrapper[4693]: E1212 15:46:42.893036 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:50.893029834 +0000 UTC m=+38.061669435 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:42 crc kubenswrapper[4693]: E1212 15:46:42.893172 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:46:42 crc kubenswrapper[4693]: E1212 15:46:42.893215 4693 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:46:42 crc kubenswrapper[4693]: E1212 15:46:42.893228 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:46:42 crc kubenswrapper[4693]: E1212 15:46:42.893385 4693 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:42 crc kubenswrapper[4693]: E1212 15:46:42.893386 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-12 15:46:50.893350123 +0000 UTC m=+38.061989724 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:46:42 crc kubenswrapper[4693]: E1212 15:46:42.893564 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:50.893529527 +0000 UTC m=+38.062169248 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.916698 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.916748 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.916761 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.916785 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:42 crc kubenswrapper[4693]: I1212 15:46:42.916799 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:42Z","lastTransitionTime":"2025-12-12T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.019334 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.019408 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.019430 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.019458 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.019480 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:43Z","lastTransitionTime":"2025-12-12T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.122115 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.122153 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.122162 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.122175 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.122185 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:43Z","lastTransitionTime":"2025-12-12T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.224569 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.224614 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.224630 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.224647 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.224656 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:43Z","lastTransitionTime":"2025-12-12T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.327658 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.327707 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.327715 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.327731 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.327742 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:43Z","lastTransitionTime":"2025-12-12T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.357220 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:46:43 crc kubenswrapper[4693]: E1212 15:46:43.357559 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.370027 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.383504 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.395049 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.406605 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.417353 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-12T15:46:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.429197 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.430083 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.430120 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.430132 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.430148 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.430163 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:43Z","lastTransitionTime":"2025-12-12T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.438927 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.449030 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.465451 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:43Z 
is after 2025-08-24T17:21:41Z" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.478130 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.492712 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.507219 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.516548 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.530452 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-1
2-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-12T15:46:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.534846 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.534873 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.534881 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.534893 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.534903 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:43Z","lastTransitionTime":"2025-12-12T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.636525 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.636555 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.636566 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.636580 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.636592 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:43Z","lastTransitionTime":"2025-12-12T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.740023 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.740052 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.740061 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.740075 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.740085 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:43Z","lastTransitionTime":"2025-12-12T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.843391 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.843458 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.843477 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.843500 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.843518 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:43Z","lastTransitionTime":"2025-12-12T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.946326 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.946371 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.946383 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.946401 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:43 crc kubenswrapper[4693]: I1212 15:46:43.946412 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:43Z","lastTransitionTime":"2025-12-12T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.049025 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.049080 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.049091 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.049115 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.049126 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:44Z","lastTransitionTime":"2025-12-12T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.151680 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.151720 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.151732 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.151748 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.151760 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:44Z","lastTransitionTime":"2025-12-12T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.254683 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.254735 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.254751 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.254777 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.254790 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:44Z","lastTransitionTime":"2025-12-12T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.356176 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.356252 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:44 crc kubenswrapper[4693]: E1212 15:46:44.356316 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:46:44 crc kubenswrapper[4693]: E1212 15:46:44.356412 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.357065 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.357090 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.357098 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.357110 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.357119 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:44Z","lastTransitionTime":"2025-12-12T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.459857 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.459910 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.459923 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.459937 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.459946 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:44Z","lastTransitionTime":"2025-12-12T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.561799 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.561843 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.561857 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.561870 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.561879 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:44Z","lastTransitionTime":"2025-12-12T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.620571 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerStarted","Data":"2c1cfbcf482678f9f4ab3dfa48a13d62bb48388b47f3965ecab7915e2f799dd5"} Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.620917 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.626021 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" event={"ID":"c6efc9d0-9c03-4235-ab59-96263c372e09","Type":"ContainerStarted","Data":"3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62"} Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.633994 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.648288 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.648458 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.661885 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.663572 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.663611 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.663624 4693 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.663641 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.663653 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:44Z","lastTransitionTime":"2025-12-12T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.673857 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.686431 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.698143 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.707504 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.720024 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-l
ib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.738137 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1cfbcf482678f9f4ab3dfa48a13d62bb48388b
47f3965ecab7915e2f799dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.748372 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.760756 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.770627 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.770661 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.770671 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.770686 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.770695 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:44Z","lastTransitionTime":"2025-12-12T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.780958 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.792031 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.807644 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd
e82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.825743 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.838971 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.848958 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.860574 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.871422 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.872701 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.872740 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.872753 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.872770 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.872781 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:44Z","lastTransitionTime":"2025-12-12T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.883803 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.895489 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.908590 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-l
ib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.927000 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1cfbcf482678f9f4ab3dfa48a13d62bb48388b
47f3965ecab7915e2f799dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.938980 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.948742 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.956566 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.970932 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 
2025-08-24T17:21:41Z" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.974426 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.974467 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.974499 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.974516 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.974526 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:44Z","lastTransitionTime":"2025-12-12T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:44 crc kubenswrapper[4693]: I1212 15:46:44.984558 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.077343 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.077405 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.077416 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.077435 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.077452 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:45Z","lastTransitionTime":"2025-12-12T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.180495 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.180548 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.180558 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.180578 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.180592 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:45Z","lastTransitionTime":"2025-12-12T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.283454 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.283500 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.283508 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.283523 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.283531 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:45Z","lastTransitionTime":"2025-12-12T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.356383 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:46:45 crc kubenswrapper[4693]: E1212 15:46:45.356521 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.386185 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.386242 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.386259 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.386309 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.386329 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:45Z","lastTransitionTime":"2025-12-12T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.489176 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.489206 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.489215 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.489227 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.489236 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:45Z","lastTransitionTime":"2025-12-12T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.591622 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.591687 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.591706 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.591728 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.591746 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:45Z","lastTransitionTime":"2025-12-12T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.629243 4693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.629760 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.652489 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.668361 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\
\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:45Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.682200 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:45Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.693447 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:45Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.694019 4693 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.694072 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.694098 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.694121 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.694137 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:45Z","lastTransitionTime":"2025-12-12T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.708350 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:45Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.734689 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1cfbcf482678f9f4ab3dfa48a13d62bb48388b
47f3965ecab7915e2f799dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:45Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.753557 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:45Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.767615 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:45Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.786584 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-12T15:46:45Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.796198 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.796245 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.796255 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.796286 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.796299 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:45Z","lastTransitionTime":"2025-12-12T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.807177 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:45Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.823430 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:45Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.850811 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:45Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.874063 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:45Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.885504 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:45Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.898425 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:45Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.898890 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.898926 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.898936 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.898949 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:45 crc kubenswrapper[4693]: I1212 15:46:45.898958 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:45Z","lastTransitionTime":"2025-12-12T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.002033 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.002076 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.002092 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.002107 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.002118 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:46Z","lastTransitionTime":"2025-12-12T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.104811 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.104878 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.104892 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.104908 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.104917 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:46Z","lastTransitionTime":"2025-12-12T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.207798 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.207842 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.207853 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.207870 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.207882 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:46Z","lastTransitionTime":"2025-12-12T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.310603 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.310644 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.310654 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.310668 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.310679 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:46Z","lastTransitionTime":"2025-12-12T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.356479 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.356549 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:46 crc kubenswrapper[4693]: E1212 15:46:46.356617 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:46:46 crc kubenswrapper[4693]: E1212 15:46:46.356739 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.413312 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.413568 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.413582 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.413597 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.413607 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:46Z","lastTransitionTime":"2025-12-12T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.516610 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.516665 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.516680 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.516699 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.516716 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:46Z","lastTransitionTime":"2025-12-12T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.619399 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.619466 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.619479 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.619501 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.619515 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:46Z","lastTransitionTime":"2025-12-12T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.631997 4693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.722173 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.722212 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.722221 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.722234 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.722242 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:46Z","lastTransitionTime":"2025-12-12T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.825626 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.825715 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.825743 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.825776 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.825798 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:46Z","lastTransitionTime":"2025-12-12T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.928132 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.928173 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.928186 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.928199 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:46 crc kubenswrapper[4693]: I1212 15:46:46.928209 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:46Z","lastTransitionTime":"2025-12-12T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.031186 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.031309 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.031338 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.031369 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.031391 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:47Z","lastTransitionTime":"2025-12-12T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.134251 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.134335 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.134347 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.134364 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.134376 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:47Z","lastTransitionTime":"2025-12-12T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.236973 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.237034 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.237052 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.237075 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.237091 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:47Z","lastTransitionTime":"2025-12-12T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.339264 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.339329 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.339337 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.339350 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.339359 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:47Z","lastTransitionTime":"2025-12-12T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.356114 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:46:47 crc kubenswrapper[4693]: E1212 15:46:47.356238 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.442081 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.442121 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.442133 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.442151 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.442163 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:47Z","lastTransitionTime":"2025-12-12T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.544965 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.545047 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.545071 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.545107 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.545132 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:47Z","lastTransitionTime":"2025-12-12T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.637380 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovnkube-controller/0.log" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.641808 4693 generic.go:334] "Generic (PLEG): container finished" podID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerID="2c1cfbcf482678f9f4ab3dfa48a13d62bb48388b47f3965ecab7915e2f799dd5" exitCode=1 Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.644393 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerDied","Data":"2c1cfbcf482678f9f4ab3dfa48a13d62bb48388b47f3965ecab7915e2f799dd5"} Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.644576 4693 scope.go:117] "RemoveContainer" containerID="2c1cfbcf482678f9f4ab3dfa48a13d62bb48388b47f3965ecab7915e2f799dd5" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.649150 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.649209 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.649222 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.649240 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.649252 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:47Z","lastTransitionTime":"2025-12-12T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.662852 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:47Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.676138 4693 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:47Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.688798 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:47Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.702087 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:47Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.720965 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:47Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.736404 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:47Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.751459 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.751988 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.752407 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.752711 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.752835 4693 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:47Z","lastTransitionTime":"2025-12-12T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.753531 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:47Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.767890 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:47Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.791649 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:47Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.819314 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1cfbcf482678f9f4ab3dfa48a13d62bb48388b
47f3965ecab7915e2f799dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1cfbcf482678f9f4ab3dfa48a13d62bb48388b47f3965ecab7915e2f799dd5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:46:47Z\\\",\\\"message\\\":\\\"kPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 15:46:46.372543 5994 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 15:46:46.372655 5994 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 15:46:46.372911 5994 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1212 15:46:46.372946 5994 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 15:46:46.372983 5994 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1212 15:46:46.372990 5994 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1212 15:46:46.373006 5994 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 15:46:46.373006 5994 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 15:46:46.373014 5994 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 15:46:46.373028 5994 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1212 15:46:46.373032 5994 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 15:46:46.373051 5994 factory.go:656] Stopping watch factory\\\\nI1212 15:46:46.373091 5994 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:47Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.836913 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:47Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.855402 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb
2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:47Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.856351 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.856475 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.856570 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.856668 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.856755 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:47Z","lastTransitionTime":"2025-12-12T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.870929 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:47Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.890250 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:47Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.959705 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.959741 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.959750 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.959762 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:47 crc kubenswrapper[4693]: I1212 15:46:47.959771 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:47Z","lastTransitionTime":"2025-12-12T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.061668 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.061945 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.062026 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.062086 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.062141 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:48Z","lastTransitionTime":"2025-12-12T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.164550 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.164589 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.164599 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.164612 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.164621 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:48Z","lastTransitionTime":"2025-12-12T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.266972 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.266998 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.267007 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.267020 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.267029 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:48Z","lastTransitionTime":"2025-12-12T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.356950 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.356952 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:48 crc kubenswrapper[4693]: E1212 15:46:48.357115 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:46:48 crc kubenswrapper[4693]: E1212 15:46:48.357293 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.369418 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.369667 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.369748 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.369824 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.369898 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:48Z","lastTransitionTime":"2025-12-12T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.473320 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.473766 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.473937 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.474022 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.474113 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:48Z","lastTransitionTime":"2025-12-12T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.577430 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.577463 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.577470 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.577484 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.577494 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:48Z","lastTransitionTime":"2025-12-12T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.651935 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovnkube-controller/0.log" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.655197 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerStarted","Data":"a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c"} Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.655363 4693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.671820 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.679533 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.679576 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.679585 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.679598 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.679610 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:48Z","lastTransitionTime":"2025-12-12T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.685886 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.698405 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.714321 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 
2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.727669 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.741178 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.759292 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6"] Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.759763 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.760295 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.761744 4693 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.762489 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.776130 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.781670 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.781719 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.781734 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.781755 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.781774 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:48Z","lastTransitionTime":"2025-12-12T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.794923 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1cfbcf482678f9f4ab3dfa48a13d62bb48388b47f3965ecab7915e2f799dd5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:46:47Z\\\",\\\"message\\\":\\\"kPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 15:46:46.372543 5994 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 15:46:46.372655 5994 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 15:46:46.372911 5994 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1212 15:46:46.372946 5994 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 15:46:46.372983 5994 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1212 15:46:46.372990 5994 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1212 15:46:46.373006 5994 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 15:46:46.373006 5994 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 15:46:46.373014 5994 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 15:46:46.373028 5994 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1212 15:46:46.373032 5994 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 15:46:46.373051 5994 factory.go:656] Stopping watch factory\\\\nI1212 15:46:46.373091 5994 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.806614 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.818600 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.832657 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.844571 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.854405 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92hsj\" (UniqueName: \"kubernetes.io/projected/fd0124f2-8890-495e-919d-da02af9ecd6f-kube-api-access-92hsj\") pod \"ovnkube-control-plane-749d76644c-bjdt6\" (UID: \"fd0124f2-8890-495e-919d-da02af9ecd6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.854565 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd0124f2-8890-495e-919d-da02af9ecd6f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bjdt6\" (UID: \"fd0124f2-8890-495e-919d-da02af9ecd6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.854627 4693 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd0124f2-8890-495e-919d-da02af9ecd6f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bjdt6\" (UID: \"fd0124f2-8890-495e-919d-da02af9ecd6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.854707 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd0124f2-8890-495e-919d-da02af9ecd6f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bjdt6\" (UID: \"fd0124f2-8890-495e-919d-da02af9ecd6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.857969 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.871174 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.885144 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.885187 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.885200 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.885219 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.885233 4693 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:48Z","lastTransitionTime":"2025-12-12T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.889285 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.901399 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.912882 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.928698 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.944779 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.955816 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92hsj\" (UniqueName: \"kubernetes.io/projected/fd0124f2-8890-495e-919d-da02af9ecd6f-kube-api-access-92hsj\") pod \"ovnkube-control-plane-749d76644c-bjdt6\" (UID: \"fd0124f2-8890-495e-919d-da02af9ecd6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.955887 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd0124f2-8890-495e-919d-da02af9ecd6f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bjdt6\" (UID: \"fd0124f2-8890-495e-919d-da02af9ecd6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.955914 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd0124f2-8890-495e-919d-da02af9ecd6f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bjdt6\" (UID: \"fd0124f2-8890-495e-919d-da02af9ecd6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.955949 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd0124f2-8890-495e-919d-da02af9ecd6f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bjdt6\" (UID: \"fd0124f2-8890-495e-919d-da02af9ecd6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.957232 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd0124f2-8890-495e-919d-da02af9ecd6f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bjdt6\" (UID: \"fd0124f2-8890-495e-919d-da02af9ecd6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.957439 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd0124f2-8890-495e-919d-da02af9ecd6f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bjdt6\" (UID: \"fd0124f2-8890-495e-919d-da02af9ecd6f\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.962949 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd0124f2-8890-495e-919d-da02af9ecd6f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bjdt6\" (UID: \"fd0124f2-8890-495e-919d-da02af9ecd6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.969612 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.980522 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92hsj\" (UniqueName: \"kubernetes.io/projected/fd0124f2-8890-495e-919d-da02af9ecd6f-kube-api-access-92hsj\") pod \"ovnkube-control-plane-749d76644c-bjdt6\" (UID: \"fd0124f2-8890-495e-919d-da02af9ecd6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.988054 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.988095 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.988106 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.988122 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.988134 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:48Z","lastTransitionTime":"2025-12-12T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:48 crc kubenswrapper[4693]: I1212 15:46:48.992558 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1cfbcf482678f9f4ab3dfa48a13d62bb48388b47f3965ecab7915e2f799dd5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:46:47Z\\\",\\\"message\\\":\\\"kPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 15:46:46.372543 5994 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 15:46:46.372655 5994 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 15:46:46.372911 5994 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1212 15:46:46.372946 5994 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 15:46:46.372983 5994 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1212 15:46:46.372990 5994 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1212 15:46:46.373006 5994 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 15:46:46.373006 5994 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 15:46:46.373014 5994 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 15:46:46.373028 5994 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1212 15:46:46.373032 5994 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 15:46:46.373051 5994 factory.go:656] Stopping watch factory\\\\nI1212 15:46:46.373091 5994 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.004413 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.017069 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.028463 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.039246 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.054113 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 
2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.064722 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.072360 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.081473 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: W1212 15:46:49.084349 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd0124f2_8890_495e_919d_da02af9ecd6f.slice/crio-6d16b99b1e6114f45e8a3d71868cace99944954e361e352c8eb5acad220913d5 WatchSource:0}: Error finding container 6d16b99b1e6114f45e8a3d71868cace99944954e361e352c8eb5acad220913d5: Status 404 returned error can't find the container with id 6d16b99b1e6114f45e8a3d71868cace99944954e361e352c8eb5acad220913d5 Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.090177 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.090372 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:49 crc 
kubenswrapper[4693]: I1212 15:46:49.090451 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.090520 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.090582 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:49Z","lastTransitionTime":"2025-12-12T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.192622 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.192931 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.192939 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.192951 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.192960 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:49Z","lastTransitionTime":"2025-12-12T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.295393 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.295446 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.295464 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.295519 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.295534 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:49Z","lastTransitionTime":"2025-12-12T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.356349 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:46:49 crc kubenswrapper[4693]: E1212 15:46:49.356479 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.399238 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.399317 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.399335 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.399359 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.399375 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:49Z","lastTransitionTime":"2025-12-12T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.505660 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.505749 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.505803 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.505908 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.505955 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:49Z","lastTransitionTime":"2025-12-12T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.610528 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.610586 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.610602 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.610624 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.610641 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:49Z","lastTransitionTime":"2025-12-12T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.661939 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovnkube-controller/1.log" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.662745 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovnkube-controller/0.log" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.666812 4693 generic.go:334] "Generic (PLEG): container finished" podID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerID="a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c" exitCode=1 Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.666900 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerDied","Data":"a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c"} Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.667341 4693 scope.go:117] "RemoveContainer" containerID="2c1cfbcf482678f9f4ab3dfa48a13d62bb48388b47f3965ecab7915e2f799dd5" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.668362 4693 scope.go:117] "RemoveContainer" containerID="a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c" Dec 12 15:46:49 crc kubenswrapper[4693]: E1212 15:46:49.668737 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.670575 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" event={"ID":"fd0124f2-8890-495e-919d-da02af9ecd6f","Type":"ContainerStarted","Data":"66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4"} Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.670674 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" event={"ID":"fd0124f2-8890-495e-919d-da02af9ecd6f","Type":"ContainerStarted","Data":"e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342"} Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.670693 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" event={"ID":"fd0124f2-8890-495e-919d-da02af9ecd6f","Type":"ContainerStarted","Data":"6d16b99b1e6114f45e8a3d71868cace99944954e361e352c8eb5acad220913d5"} Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.692423 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.711669 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.713302 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.713363 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.713379 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.713405 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.713418 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:49Z","lastTransitionTime":"2025-12-12T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.726869 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.746081 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.768919 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.787540 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.808080 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.816234 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.816298 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.816311 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.816331 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.816344 4693 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:49Z","lastTransitionTime":"2025-12-12T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.826821 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.843662 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.860495 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.874101 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.881319 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.887991 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.912349 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d9
42d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1cfbcf482678f9f4ab3dfa48a13d62bb48388b47f3965ecab7915e2f799dd5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:46:47Z\\\",\\\"message\\\":\\\"kPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 15:46:46.372543 5994 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 15:46:46.372655 5994 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 15:46:46.372911 5994 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1212 15:46:46.372946 5994 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 15:46:46.372983 5994 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1212 15:46:46.372990 5994 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1212 15:46:46.373006 5994 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 15:46:46.373006 5994 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 15:46:46.373014 5994 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 15:46:46.373028 5994 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1212 15:46:46.373032 5994 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 15:46:46.373051 5994 factory.go:656] Stopping watch factory\\\\nI1212 15:46:46.373091 5994 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"message\\\":\\\"v openshift-network-node-identity/network-node-identity-vrzqb]\\\\nI1212 15:46:48.971982 6135 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1212 15:46:48.971865 6135 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z]\\\\nI1212 15:46:48.971989 6135 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 
fd69\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.918527 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.918577 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.918588 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.918602 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.918616 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:49Z","lastTransitionTime":"2025-12-12T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.927119 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.939813 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.951411 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.965487 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.977467 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:49 crc kubenswrapper[4693]: I1212 15:46:49.988923 4693 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:49Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.011893 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1cfbcf482678f9f4ab3dfa48a13d62bb48388b47f3965ecab7915e2f799dd5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:46:47Z\\\",\\\"message\\\":\\\"kPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 15:46:46.372543 5994 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 15:46:46.372655 5994 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 15:46:46.372911 5994 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1212 15:46:46.372946 5994 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 15:46:46.372983 5994 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1212 15:46:46.372990 5994 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1212 15:46:46.373006 5994 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 15:46:46.373006 5994 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 15:46:46.373014 5994 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 15:46:46.373028 5994 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1212 15:46:46.373032 5994 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 15:46:46.373051 5994 factory.go:656] Stopping watch factory\\\\nI1212 15:46:46.373091 5994 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"message\\\":\\\"v openshift-network-node-identity/network-node-identity-vrzqb]\\\\nI1212 15:46:48.971982 6135 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1212 15:46:48.971865 6135 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z]\\\\nI1212 15:46:48.971989 6135 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 
fd69\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.020738 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.020771 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.020779 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.020793 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.020802 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:50Z","lastTransitionTime":"2025-12-12T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.022757 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.034164 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.044881 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.055466 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.072716 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 
2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.083872 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.095343 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.109165 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.122335 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.123642 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.123673 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.123682 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.123697 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.123706 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:50Z","lastTransitionTime":"2025-12-12T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
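The recurring KubeletNotReady condition here is the container runtime reporting NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/ (ovnkube-controller keeps exiting before it can write one). A minimal sketch of that readiness check, assuming the directory named in the log and the conventional .conf/.conflist/.json config suffixes:

    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"          # directory named in the log message
    CNI_EXTENSIONS = (".conf", ".conflist", ".json")    # conventional CNI suffixes (assumption)

    def network_ready(conf_dir: str = CNI_CONF_DIR) -> bool:
        """Return True if at least one CNI configuration file is present."""
        try:
            return any(name.endswith(CNI_EXTENSIONS) for name in os.listdir(conf_dir))
        except FileNotFoundError:
            return False

    if not network_ready():
        print("NetworkReady=false: no CNI configuration file in", CNI_CONF_DIR)

Until a network plugin drops a config file into that directory, the node stays NotReady and pods needing pod networking (like network-metrics-daemon below) cannot get a sandbox.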
Has your network provider started?"} Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.135941 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.225857 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.225890 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.225899 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.225913 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.225923 4693 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:50Z","lastTransitionTime":"2025-12-12T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.247899 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-w4zs6"] Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.248541 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.248682 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.263543 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.276179 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.288712 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-l
ib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.307556 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283
fd56833529e170046504cf7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1cfbcf482678f9f4ab3dfa48a13d62bb48388b47f3965ecab7915e2f799dd5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:46:47Z\\\",\\\"message\\\":\\\"kPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 15:46:46.372543 5994 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 15:46:46.372655 5994 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1212 15:46:46.372911 5994 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1212 15:46:46.372946 5994 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 15:46:46.372983 5994 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1212 15:46:46.372990 5994 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1212 15:46:46.373006 5994 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 15:46:46.373006 5994 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1212 15:46:46.373014 5994 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 15:46:46.373028 5994 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1212 15:46:46.373032 5994 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 15:46:46.373051 5994 factory.go:656] Stopping watch factory\\\\nI1212 15:46:46.373091 5994 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"message\\\":\\\"v openshift-network-node-identity/network-node-identity-vrzqb]\\\\nI1212 15:46:48.971982 6135 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1212 15:46:48.971865 6135 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z]\\\\nI1212 15:46:48.971989 6135 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.320972 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.329807 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.329913 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.329924 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.330475 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.330489 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:50Z","lastTransitionTime":"2025-12-12T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.335674 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.349222 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.356215 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.356215 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.356371 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.356413 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.360550 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.369672 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2wcb\" (UniqueName: \"kubernetes.io/projected/6ef3804b-c2b3-4645-b60f-9bc977a89f69-kube-api-access-r2wcb\") pod \"network-metrics-daemon-w4zs6\" (UID: \"6ef3804b-c2b3-4645-b60f-9bc977a89f69\") " pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.369807 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs\") pod \"network-metrics-daemon-w4zs6\" (UID: 
\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\") " pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.375284 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1
f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.389667 4693 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.403841 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.418426 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.429892 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.432518 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.432555 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.432564 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.432577 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.432586 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:50Z","lastTransitionTime":"2025-12-12T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.440339 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.454316 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.465381 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.471075 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs\") pod \"network-metrics-daemon-w4zs6\" (UID: \"6ef3804b-c2b3-4645-b60f-9bc977a89f69\") " pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.471212 4693 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2wcb\" (UniqueName: \"kubernetes.io/projected/6ef3804b-c2b3-4645-b60f-9bc977a89f69-kube-api-access-r2wcb\") pod \"network-metrics-daemon-w4zs6\" (UID: \"6ef3804b-c2b3-4645-b60f-9bc977a89f69\") " pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.471307 4693 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.471446 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs podName:6ef3804b-c2b3-4645-b60f-9bc977a89f69 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:50.971388655 +0000 UTC m=+38.140028296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs") pod "network-metrics-daemon-w4zs6" (UID: "6ef3804b-c2b3-4645-b60f-9bc977a89f69") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.488568 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2wcb\" (UniqueName: \"kubernetes.io/projected/6ef3804b-c2b3-4645-b60f-9bc977a89f69-kube-api-access-r2wcb\") pod \"network-metrics-daemon-w4zs6\" (UID: \"6ef3804b-c2b3-4645-b60f-9bc977a89f69\") " pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.535318 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.535460 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.535490 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.535520 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.535543 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:50Z","lastTransitionTime":"2025-12-12T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.637167 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.637206 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.637217 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.637232 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.637245 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:50Z","lastTransitionTime":"2025-12-12T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.675926 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovnkube-controller/1.log" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.679196 4693 scope.go:117] "RemoveContainer" containerID="a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c" Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.679356 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.691241 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.702981 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.721347 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"message\\\":\\\"v openshift-network-node-identity/network-node-identity-vrzqb]\\\\nI1212 15:46:48.971982 6135 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1212 15:46:48.971865 6135 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z]\\\\nI1212 15:46:48.971989 6135 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.733043 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.739577 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.739619 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.739651 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.739666 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.739675 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:50Z","lastTransitionTime":"2025-12-12T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.746451 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.756673 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.769330 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.781220 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.793170 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-l
ib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.805121 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.819677 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.830002 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.842383 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.842422 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.842430 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.842443 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.842452 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:50Z","lastTransitionTime":"2025-12-12T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.844840 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7
dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.855076 4693 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.866551 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.876357 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:50Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.944565 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.944605 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.944615 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.944631 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.944642 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:50Z","lastTransitionTime":"2025-12-12T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.976038 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.976126 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.976155 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.976175 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.976197 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs\") pod \"network-metrics-daemon-w4zs6\" (UID: \"6ef3804b-c2b3-4645-b60f-9bc977a89f69\") " pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:46:50 crc kubenswrapper[4693]: I1212 15:46:50.976215 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.976301 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:47:06.976242249 +0000 UTC m=+54.144881850 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.976306 4693 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.976353 4693 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.976362 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 15:47:06.976356212 +0000 UTC m=+54.144995813 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.976397 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 15:47:06.976383063 +0000 UTC m=+54.145022664 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.976436 4693 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.976454 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs podName:6ef3804b-c2b3-4645-b60f-9bc977a89f69 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:51.976448045 +0000 UTC m=+39.145087646 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs") pod "network-metrics-daemon-w4zs6" (UID: "6ef3804b-c2b3-4645-b60f-9bc977a89f69") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.976488 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.976492 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.976526 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.976542 4693 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.976576 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 15:47:06.976565038 +0000 UTC m=+54.145204699 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.976506 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.976609 4693 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:50 crc kubenswrapper[4693]: E1212 15:46:50.976650 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 15:47:06.97663678 +0000 UTC m=+54.145276381 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.047101 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.047139 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.047148 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.047166 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.047177 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:51Z","lastTransitionTime":"2025-12-12T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.150508 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.150577 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.150615 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.150649 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.150674 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:51Z","lastTransitionTime":"2025-12-12T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.254253 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.254387 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.254421 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.254450 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.254473 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:51Z","lastTransitionTime":"2025-12-12T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.356294 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:46:51 crc kubenswrapper[4693]: E1212 15:46:51.356418 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.357795 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.357823 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.357834 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.357849 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.357860 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:51Z","lastTransitionTime":"2025-12-12T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.386805 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.386872 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.386889 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.386916 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.386934 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:51Z","lastTransitionTime":"2025-12-12T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:51 crc kubenswrapper[4693]: E1212 15:46:51.402812 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:51Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.407769 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.407814 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.407829 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.407864 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.407879 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:51Z","lastTransitionTime":"2025-12-12T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:51 crc kubenswrapper[4693]: E1212 15:46:51.424492 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:51Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.428790 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.428814 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.428822 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.428834 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.428843 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:51Z","lastTransitionTime":"2025-12-12T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:51 crc kubenswrapper[4693]: E1212 15:46:51.447891 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:51Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.452209 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.452235 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.452243 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.452255 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.452263 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:51Z","lastTransitionTime":"2025-12-12T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:51 crc kubenswrapper[4693]: E1212 15:46:51.468041 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:51Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.472585 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.472642 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.472655 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.472673 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.472690 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:51Z","lastTransitionTime":"2025-12-12T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:51 crc kubenswrapper[4693]: E1212 15:46:51.487970 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:51Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:51 crc kubenswrapper[4693]: E1212 15:46:51.488092 4693 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.489594 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.489637 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.489648 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.489664 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.489676 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:51Z","lastTransitionTime":"2025-12-12T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.592391 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.592441 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.592453 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.592471 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.592483 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:51Z","lastTransitionTime":"2025-12-12T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.695747 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.695797 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.695808 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.695825 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.695836 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:51Z","lastTransitionTime":"2025-12-12T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.797998 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.798047 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.798062 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.798084 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.798100 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:51Z","lastTransitionTime":"2025-12-12T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.900897 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.900939 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.900949 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.900962 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.900971 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:51Z","lastTransitionTime":"2025-12-12T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:51 crc kubenswrapper[4693]: I1212 15:46:51.984899 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs\") pod \"network-metrics-daemon-w4zs6\" (UID: \"6ef3804b-c2b3-4645-b60f-9bc977a89f69\") " pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:46:51 crc kubenswrapper[4693]: E1212 15:46:51.985106 4693 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:46:51 crc kubenswrapper[4693]: E1212 15:46:51.985227 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs podName:6ef3804b-c2b3-4645-b60f-9bc977a89f69 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:53.985196617 +0000 UTC m=+41.153836258 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs") pod "network-metrics-daemon-w4zs6" (UID: "6ef3804b-c2b3-4645-b60f-9bc977a89f69") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.004502 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.004533 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.004544 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.004560 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.004572 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:52Z","lastTransitionTime":"2025-12-12T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.107093 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.107129 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.107138 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.107151 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.107162 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:52Z","lastTransitionTime":"2025-12-12T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.209793 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.209830 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.209838 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.209852 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.209860 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:52Z","lastTransitionTime":"2025-12-12T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.312587 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.312652 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.312678 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.312709 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.312732 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:52Z","lastTransitionTime":"2025-12-12T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.357064 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.357144 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.357073 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:46:52 crc kubenswrapper[4693]: E1212 15:46:52.357238 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:46:52 crc kubenswrapper[4693]: E1212 15:46:52.357517 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:46:52 crc kubenswrapper[4693]: E1212 15:46:52.357664 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.415176 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.415249 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.415319 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.415349 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.415369 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:52Z","lastTransitionTime":"2025-12-12T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.518385 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.518439 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.518457 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.518480 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.518497 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:52Z","lastTransitionTime":"2025-12-12T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.621932 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.622001 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.622014 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.622029 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.622039 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:52Z","lastTransitionTime":"2025-12-12T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.724844 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.724890 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.724899 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.724914 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.724924 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:52Z","lastTransitionTime":"2025-12-12T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.827721 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.827779 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.827788 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.827801 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.827812 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:52Z","lastTransitionTime":"2025-12-12T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.931290 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.931341 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.931357 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.931373 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:52 crc kubenswrapper[4693]: I1212 15:46:52.931385 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:52Z","lastTransitionTime":"2025-12-12T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.034143 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.034193 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.034204 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.034219 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.034232 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:53Z","lastTransitionTime":"2025-12-12T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.137240 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.137331 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.137355 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.137380 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.137401 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:53Z","lastTransitionTime":"2025-12-12T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.240923 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.240979 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.240989 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.241007 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.241017 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:53Z","lastTransitionTime":"2025-12-12T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.343137 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.343182 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.343192 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.343211 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.343221 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:53Z","lastTransitionTime":"2025-12-12T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.356038 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:46:53 crc kubenswrapper[4693]: E1212 15:46:53.356191 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.367898 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.383364 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.397882 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:53Z is after 2025-08-24T17:21:41Z" Dec 12 
15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.413021 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.424468 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.436518 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.445664 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.445712 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.445726 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.445743 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.445756 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:53Z","lastTransitionTime":"2025-12-12T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.448624 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.461743 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.474722 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.488678 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.499859 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.523058 4693 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.548668 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.548716 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.548729 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.548745 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.548758 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:53Z","lastTransitionTime":"2025-12-12T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.557781 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283
fd56833529e170046504cf7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"message\\\":\\\"v openshift-network-node-identity/network-node-identity-vrzqb]\\\\nI1212 15:46:48.971982 6135 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1212 15:46:48.971865 6135 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z]\\\\nI1212 15:46:48.971989 6135 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.569102 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.583052 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.595410 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.651573 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.651631 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.651647 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.651667 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.651682 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:53Z","lastTransitionTime":"2025-12-12T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.754815 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.754920 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.754938 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.754961 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.754978 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:53Z","lastTransitionTime":"2025-12-12T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.858203 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.858483 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.858497 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.858514 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.858525 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:53Z","lastTransitionTime":"2025-12-12T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.961594 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.961654 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.961667 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.961685 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:53 crc kubenswrapper[4693]: I1212 15:46:53.961697 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:53Z","lastTransitionTime":"2025-12-12T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.008552 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs\") pod \"network-metrics-daemon-w4zs6\" (UID: \"6ef3804b-c2b3-4645-b60f-9bc977a89f69\") " pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:46:54 crc kubenswrapper[4693]: E1212 15:46:54.008703 4693 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:46:54 crc kubenswrapper[4693]: E1212 15:46:54.008763 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs podName:6ef3804b-c2b3-4645-b60f-9bc977a89f69 nodeName:}" failed. No retries permitted until 2025-12-12 15:46:58.008748781 +0000 UTC m=+45.177388372 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs") pod "network-metrics-daemon-w4zs6" (UID: "6ef3804b-c2b3-4645-b60f-9bc977a89f69") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.064426 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.064511 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.064532 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.064562 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.064584 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:54Z","lastTransitionTime":"2025-12-12T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.167520 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.167591 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.167617 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.167640 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.167656 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:54Z","lastTransitionTime":"2025-12-12T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.270641 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.270711 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.270734 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.270766 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.270789 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:54Z","lastTransitionTime":"2025-12-12T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.356546 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.356646 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:46:54 crc kubenswrapper[4693]: E1212 15:46:54.356755 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.356682 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:46:54 crc kubenswrapper[4693]: E1212 15:46:54.356885 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:46:54 crc kubenswrapper[4693]: E1212 15:46:54.357048 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.374828 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.374962 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.375037 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.375066 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.375083 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:54Z","lastTransitionTime":"2025-12-12T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.477961 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.478013 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.478022 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.478037 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.478048 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:54Z","lastTransitionTime":"2025-12-12T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.581055 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.581338 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.581354 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.581377 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.581394 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:54Z","lastTransitionTime":"2025-12-12T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.684032 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.684104 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.684219 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.684253 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.684303 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:54Z","lastTransitionTime":"2025-12-12T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.787489 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.787572 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.787584 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.787601 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.787613 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:54Z","lastTransitionTime":"2025-12-12T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.890864 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.890937 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.891005 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.891041 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.891066 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:54Z","lastTransitionTime":"2025-12-12T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.993860 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.993930 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.993954 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.993980 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:54 crc kubenswrapper[4693]: I1212 15:46:54.993997 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:54Z","lastTransitionTime":"2025-12-12T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:46:55 crc kubenswrapper[4693]: I1212 15:46:55.097103 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:46:55 crc kubenswrapper[4693]: I1212 15:46:55.097140 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:46:55 crc kubenswrapper[4693]: I1212 15:46:55.097149 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:46:55 crc kubenswrapper[4693]: I1212 15:46:55.097370 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:46:55 crc kubenswrapper[4693]: I1212 15:46:55.097380 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:55Z","lastTransitionTime":"2025-12-12T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 15:46:55 crc kubenswrapper[4693]: I1212 15:46:55.200493 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:46:55 crc kubenswrapper[4693]: I1212 15:46:55.200536 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:46:55 crc kubenswrapper[4693]: I1212 15:46:55.200547 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:46:55 crc kubenswrapper[4693]: I1212 15:46:55.200563 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:46:55 crc kubenswrapper[4693]: I1212 15:46:55.200575 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:46:55Z","lastTransitionTime":"2025-12-12T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the five-record node-status cycle above repeats every ~100 ms, identical except for timestamps, from 15:46:55.303 through 15:46:56.127; only non-repeating records are shown below]
Dec 12 15:46:55 crc kubenswrapper[4693]: I1212 15:46:55.356064 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 12 15:46:55 crc kubenswrapper[4693]: E1212 15:46:55.356250 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
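A quick way to audit how often the kubelet re-emits this identical NodeNotReady cycle is to fold the journal down to (call site, message) pairs and count them. The sketch below is illustrative only: it assumes klog-style lines in exactly the format shown above on stdin, and the regex is my own, not anything kubenswrapper provides.

#!/usr/bin/env python3
# Minimal sketch: count repeated kubelet records in a journal dump.
# Assumes klog-style lines like the ones above; the regex is illustrative.
import re
import sys
from collections import Counter

# Capture the logging call site (e.g. kubelet_node_status.go:724) and the
# quoted message, ignoring timestamps so repeats collapse into one key.
RECORD = re.compile(r'(\w+\.go:\d+)\] "([^"]+)"(?: .*event="([^"]+)")?')

counts = Counter()
for line in sys.stdin:
    m = RECORD.search(line)
    if m:
        site, msg, event = m.groups()
        counts[(site, msg, event or "")] += 1

for (site, msg, event), n in counts.most_common():
    print(f"{n:6d}  {site}  {msg}  {event}")

Fed this section, the "Recording event message for node" and "Node became not ready" records dominate the output, while the sandbox and mount errors appear only a handful of times each.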
[node-status cycle repeats at 15:46:56.230 and 15:46:56.334]
Dec 12 15:46:56 crc kubenswrapper[4693]: I1212 15:46:56.356580 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 12 15:46:56 crc kubenswrapper[4693]: I1212 15:46:56.356671 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6"
Dec 12 15:46:56 crc kubenswrapper[4693]: I1212 15:46:56.356693 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 12 15:46:56 crc kubenswrapper[4693]: E1212 15:46:56.356770 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 12 15:46:56 crc kubenswrapper[4693]: E1212 15:46:56.356913 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69"
Dec 12 15:46:56 crc kubenswrapper[4693]: E1212 15:46:56.356991 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[node-status cycle repeats at 15:46:56.437 and 15:46:56.540]
[node-status cycle repeats every ~100 ms from 15:46:56.642 through 15:46:57.157]
[node-status cycle repeats at 15:46:57.260; the network-check-target-xd92c "No sandbox"/"Error syncing pod" pair repeats at 15:46:57.356169/.356328 with the same network-not-ready error and podUID as at 15:46:55.356; node-status cycle continues every ~100 ms from 15:46:57.363 through 15:46:57.982]
Dec 12 15:46:58 crc kubenswrapper[4693]: I1212 15:46:58.053592 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs\") pod \"network-metrics-daemon-w4zs6\" (UID: \"6ef3804b-c2b3-4645-b60f-9bc977a89f69\") " pod="openshift-multus/network-metrics-daemon-w4zs6"
Dec 12 15:46:58 crc kubenswrapper[4693]: E1212 15:46:58.053821 4693 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 15:46:58 crc kubenswrapper[4693]: E1212 15:46:58.053943 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs podName:6ef3804b-c2b3-4645-b60f-9bc977a89f69 nodeName:}" failed. No retries permitted until 2025-12-12 15:47:06.053914328 +0000 UTC m=+53.222553969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs") pod "network-metrics-daemon-w4zs6" (UID: "6ef3804b-c2b3-4645-b60f-9bc977a89f69") : object "openshift-multus"/"metrics-daemon-secret" not registered
[node-status cycle repeats at 15:46:58.085]
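The durationBeforeRetry 8s in that mount failure is an exponential backoff at work: each consecutive MountVolume.SetUp failure roughly doubles the wait before the next attempt. A small sketch of such a schedule, under assumed constants (500 ms initial delay, factor 2, cap around 2m2s; treat all three as assumptions rather than values read from the kubelet source):

#!/usr/bin/env python3
# Sketch of an exponential retry schedule like the one behind
# "durationBeforeRetry 8s". Constants are assumptions modeled on the
# kubelet's pending-operations backoff, not confirmed from its source.
INITIAL_DELAY_S = 0.5   # first retry delay (assumed)
FACTOR = 2.0            # multiplier per consecutive failure (assumed)
MAX_DELAY_S = 122.0     # cap, ~2m2s (assumed)

def backoff_schedule(failures: int):
    """Yield the wait before each retry for `failures` consecutive errors."""
    delay = INITIAL_DELAY_S
    for _ in range(failures):
        yield delay
        delay = min(delay * FACTOR, MAX_DELAY_S)

# Five straight failures give 0.5s, 1s, 2s, 4s, 8s -- the 8s matches the
# durationBeforeRetry recorded in the log above.
for i, d in enumerate(backoff_schedule(5), start=1):
    print(f"attempt {i}: wait {d:g}s before retrying")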
[node-status cycle repeats at 15:46:58.189 and .292; the "No sandbox"/"Error syncing pod" records for networking-console-plugin-85b44fc459-gdk6g, network-metrics-daemon-w4zs6, and network-check-source-55646444c4-trplf repeat at 15:46:58.356734 through .357183, identical to the 15:46:56.356 set except for timestamps; node-status cycle repeats at 15:46:58.395 and .498]
Dec 12 15:46:58 crc kubenswrapper[4693]: I1212 15:46:58.566961 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt"
Dec 12 15:46:58 crc kubenswrapper[4693]: I1212 15:46:58.567973 4693 scope.go:117] "RemoveContainer" containerID="a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c"
Dec 12 15:46:58 crc kubenswrapper[4693]: E1212 15:46:58.568237 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd"
[node-status cycle repeats at 15:46:58.600 and 15:46:58.703]
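This is the root of the whole section: ovnkube-controller, the component that would provide the missing CNI configuration, is itself in CrashLoopBackOff, so every pod above stays stuck on network-not-ready. The "back-off 10s" grows on the kubelet's crash-loop schedule; a sketch under the documented defaults (10s initial delay, doubling per consecutive crash, capped at 5 minutes), with the reset-after-healthy-run behavior simplified away:

#!/usr/bin/env python3
# Sketch of CrashLoopBackOff delay growth. The 10s start, doubling, and 5m
# cap follow the documented kubelet defaults; reset behavior is omitted.
BASE_S = 10        # first back-off after a crash
CAP_S = 300        # delays stop growing at 5 minutes

def crashloop_delays(crashes: int):
    """Back-off before each restart attempt for `crashes` consecutive crashes."""
    return [min(BASE_S * (2 ** i), CAP_S) for i in range(crashes)]

# Seven crashes: 10s, 20s, 40s, 80s, 160s, 300s, 300s -- the "back-off 10s"
# in the record above is the first rung of this ladder.
print(crashloop_delays(7))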
[node-status cycle repeats every ~100 ms from 15:46:58.806 through 15:46:59.321]
[the network-check-target-xd92c "No sandbox"/"Error syncing pod" pair repeats at 15:46:59.356891/.357038; node-status cycle repeats at 15:46:59.424 and 15:46:59.527]
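While the CNI config is absent, every heartbeat republishes Ready=False with reason KubeletNotReady. If the kubernetes Python client is available, a watcher for the transition back to Ready might look like the following; read_node_status and the condition layout are the client's standard API, but the polling loop, node name, and interval are illustrative assumptions.

#!/usr/bin/env python3
# Illustrative poller: wait for node "crc" to report Ready=True again.
# Requires the `kubernetes` package and a working kubeconfig.
import time
from kubernetes import client, config

config.load_kube_config()           # or config.load_incluster_config()
v1 = client.CoreV1Api()

while True:
    node = v1.read_node_status("crc")
    ready = next((c for c in node.status.conditions if c.type == "Ready"), None)
    if ready is not None:
        print(f"Ready={ready.status} reason={ready.reason}")
        if ready.status == "True":
            break
    time.sleep(5)                   # poll interval (assumed)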
[node-status cycle repeats every ~100 ms from 15:46:59.631 through 15:47:00.144]
Has your network provider started?"} Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.246656 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.246699 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.246708 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.246722 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.246731 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:00Z","lastTransitionTime":"2025-12-12T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.349409 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.349457 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.349468 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.349484 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.349495 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:00Z","lastTransitionTime":"2025-12-12T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.356770 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:00 crc kubenswrapper[4693]: E1212 15:47:00.356976 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.356777 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:00 crc kubenswrapper[4693]: E1212 15:47:00.357067 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.356780 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:00 crc kubenswrapper[4693]: E1212 15:47:00.357140 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.451964 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.452018 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.452034 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.452057 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.452071 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:00Z","lastTransitionTime":"2025-12-12T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.554617 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.554662 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.554673 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.554689 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.554700 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:00Z","lastTransitionTime":"2025-12-12T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.657597 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.657631 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.657639 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.657652 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.657662 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:00Z","lastTransitionTime":"2025-12-12T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.760484 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.760536 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.760546 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.760559 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.760569 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:00Z","lastTransitionTime":"2025-12-12T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.863471 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.863539 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.863553 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.863572 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.863586 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:00Z","lastTransitionTime":"2025-12-12T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.966985 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.967044 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.967055 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.967077 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:00 crc kubenswrapper[4693]: I1212 15:47:00.967092 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:00Z","lastTransitionTime":"2025-12-12T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.071097 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.071172 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.071186 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.071224 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.071242 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:01Z","lastTransitionTime":"2025-12-12T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.174134 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.174203 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.174216 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.174243 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.174257 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:01Z","lastTransitionTime":"2025-12-12T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.277358 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.277435 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.277452 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.277470 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.277483 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:01Z","lastTransitionTime":"2025-12-12T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.356679 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:01 crc kubenswrapper[4693]: E1212 15:47:01.356804 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.379992 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.380049 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.380073 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.380103 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.380129 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:01Z","lastTransitionTime":"2025-12-12T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.483079 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.483118 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.483127 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.483146 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.483155 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:01Z","lastTransitionTime":"2025-12-12T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.503214 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.503255 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.503285 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.503301 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.503313 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:01Z","lastTransitionTime":"2025-12-12T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:01 crc kubenswrapper[4693]: E1212 15:47:01.524110 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:01Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.534812 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.534859 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
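[annotation] The node-status patch above is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, months before the kubelet's current time of 2025-12-12. A minimal sketch to confirm this from the node follows; the endpoint comes from the log, and the third-party cryptography package is assumed to be installed (the stdlib ssl module can fetch the certificate but cannot parse its validity dates).

```python
# Minimal sketch to confirm the x509 expiry reported above, run on the
# node. Endpoint taken from the log; `cryptography` assumed installed.
import ssl
from cryptography import x509

# Unverified handshake, so an already-expired cert can still be fetched.
pem = ssl.get_server_certificate(("127.0.0.1", 9743))
cert = x509.load_pem_x509_certificate(pem.encode())

print("subject: ", cert.subject.rfc4514_string())
# cryptography >= 42; use cert.not_valid_after on older releases.
print("notAfter:", cert.not_valid_after_utc)  # expect 2025-08-24T17:21:41Z per the log
```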
event="NodeHasNoDiskPressure" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.534872 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.534887 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.534896 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:01Z","lastTransitionTime":"2025-12-12T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:01 crc kubenswrapper[4693]: E1212 15:47:01.556749 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:01Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.562117 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.562185 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
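[annotation] The retried patch bodies above are hard to read because the journal embeds the strategic-merge-patch JSON inside the err string with escaped quotes. A small sketch for un-escaping and pretty-printing such a payload is below; the `raw` value is a deliberately truncated stand-in for illustration, not the full patch from the log.

```python
# Sketch: make the escaped patch JSON in the err string readable. Paste
# the patch text from the journal into `raw`; the value here is a
# truncated stand-in, not the full payload logged above.
import json

raw = r'{\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"MemoryPressure\"},{\"type\":\"Ready\"}]}}'

decoded = raw.replace(r'\"', '"')  # undo the journal's quote escaping
print(json.dumps(json.loads(decoded), indent=2, sort_keys=True))
```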
event="NodeHasNoDiskPressure" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.562198 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.562215 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.562228 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:01Z","lastTransitionTime":"2025-12-12T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:01 crc kubenswrapper[4693]: E1212 15:47:01.579189 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:01Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.584976 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.585017 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.585034 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.585056 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.585068 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:01Z","lastTransitionTime":"2025-12-12T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:01 crc kubenswrapper[4693]: E1212 15:47:01.600998 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:01Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.604994 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.605051 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.605069 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.605092 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.605107 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:01Z","lastTransitionTime":"2025-12-12T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:01 crc kubenswrapper[4693]: E1212 15:47:01.618457 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:01Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:01 crc kubenswrapper[4693]: E1212 15:47:01.618621 4693 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.620493 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.620545 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.620560 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.620586 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.620605 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:01Z","lastTransitionTime":"2025-12-12T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.722923 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.722986 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.722997 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.723013 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.723024 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:01Z","lastTransitionTime":"2025-12-12T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.825848 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.825912 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.825926 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.825952 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.825968 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:01Z","lastTransitionTime":"2025-12-12T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.931050 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.931087 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.931095 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.931109 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:01 crc kubenswrapper[4693]: I1212 15:47:01.931120 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:01Z","lastTransitionTime":"2025-12-12T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.034110 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.034164 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.034174 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.034191 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.034202 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:02Z","lastTransitionTime":"2025-12-12T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.137410 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.137519 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.137537 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.137564 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.137581 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:02Z","lastTransitionTime":"2025-12-12T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.240688 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.240724 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.240732 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.240747 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.240757 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:02Z","lastTransitionTime":"2025-12-12T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.343478 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.343535 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.343563 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.343588 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.343608 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:02Z","lastTransitionTime":"2025-12-12T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.356792 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.356877 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.356814 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:02 crc kubenswrapper[4693]: E1212 15:47:02.356953 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:02 crc kubenswrapper[4693]: E1212 15:47:02.357036 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:02 crc kubenswrapper[4693]: E1212 15:47:02.357216 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.446459 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.446529 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.446551 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.446571 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.446586 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:02Z","lastTransitionTime":"2025-12-12T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.550024 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.550093 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.550104 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.550127 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.550138 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:02Z","lastTransitionTime":"2025-12-12T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.653759 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.653815 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.653826 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.653848 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.653866 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:02Z","lastTransitionTime":"2025-12-12T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.757012 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.757055 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.757063 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.757078 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.757088 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:02Z","lastTransitionTime":"2025-12-12T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.859099 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.859145 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.859156 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.859173 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.859187 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:02Z","lastTransitionTime":"2025-12-12T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.962434 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.962524 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.962553 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.962590 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:02 crc kubenswrapper[4693]: I1212 15:47:02.962625 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:02Z","lastTransitionTime":"2025-12-12T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.066320 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.066373 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.066390 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.066415 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.066431 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:03Z","lastTransitionTime":"2025-12-12T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.169409 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.169469 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.169491 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.169518 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.169541 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:03Z","lastTransitionTime":"2025-12-12T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.271821 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.271892 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.271918 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.271949 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.271975 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:03Z","lastTransitionTime":"2025-12-12T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.357207 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:03 crc kubenswrapper[4693]: E1212 15:47:03.357513 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.376443 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.376523 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.376544 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.376577 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.376598 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:03Z","lastTransitionTime":"2025-12-12T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.384999 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.402501 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.417741 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.431643 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.446932 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.462913 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.474881 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-12T15:47:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.478882 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.478928 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.478942 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.478965 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.478979 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:03Z","lastTransitionTime":"2025-12-12T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.490511 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.505370 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.523886 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-l
ib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.548977 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283
fd56833529e170046504cf7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"message\\\":\\\"v openshift-network-node-identity/network-node-identity-vrzqb]\\\\nI1212 15:46:48.971982 6135 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1212 15:46:48.971865 6135 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z]\\\\nI1212 15:46:48.971989 6135 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.562122 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.575789 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.580683 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.580705 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.580712 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.580725 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.580734 4693 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:03Z","lastTransitionTime":"2025-12-12T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.587432 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.602679 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.615102 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:03Z is after 2025-08-24T17:21:41Z" Dec 12 
15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.683205 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.683234 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.683243 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.683257 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.683266 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:03Z","lastTransitionTime":"2025-12-12T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.786066 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.786138 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.786168 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.786201 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.786223 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:03Z","lastTransitionTime":"2025-12-12T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.890093 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.890190 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.890210 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.890235 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.890252 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:03Z","lastTransitionTime":"2025-12-12T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.994118 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.994189 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.994209 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.994236 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:03 crc kubenswrapper[4693]: I1212 15:47:03.994256 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:03Z","lastTransitionTime":"2025-12-12T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.097361 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.097741 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.097820 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.097896 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.097976 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:04Z","lastTransitionTime":"2025-12-12T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.200402 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.200961 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.201166 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.201264 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.201366 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:04Z","lastTransitionTime":"2025-12-12T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.309144 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.309247 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.309375 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.310073 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.310087 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:04Z","lastTransitionTime":"2025-12-12T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.356723 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.356781 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.356871 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:04 crc kubenswrapper[4693]: E1212 15:47:04.357061 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:04 crc kubenswrapper[4693]: E1212 15:47:04.357169 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:04 crc kubenswrapper[4693]: E1212 15:47:04.357233 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.413005 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.413046 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.413059 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.413076 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.413088 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:04Z","lastTransitionTime":"2025-12-12T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.515737 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.515773 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.515782 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.515794 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.515803 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:04Z","lastTransitionTime":"2025-12-12T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.619545 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.619612 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.619624 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.619645 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.619660 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:04Z","lastTransitionTime":"2025-12-12T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.721302 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.721338 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.721346 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.721361 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.721369 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:04Z","lastTransitionTime":"2025-12-12T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.824345 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.824703 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.824814 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.824914 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.825014 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:04Z","lastTransitionTime":"2025-12-12T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.928339 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.928376 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.928385 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.928403 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:04 crc kubenswrapper[4693]: I1212 15:47:04.928413 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:04Z","lastTransitionTime":"2025-12-12T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.031245 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.031340 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.031355 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.031375 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.031388 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:05Z","lastTransitionTime":"2025-12-12T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.134969 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.135040 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.135053 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.135074 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.135090 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:05Z","lastTransitionTime":"2025-12-12T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.237983 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.238056 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.238071 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.238093 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.238106 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:05Z","lastTransitionTime":"2025-12-12T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.341265 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.341352 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.341366 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.341392 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.341406 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:05Z","lastTransitionTime":"2025-12-12T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.356810 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:05 crc kubenswrapper[4693]: E1212 15:47:05.356991 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.445094 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.445146 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.445155 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.445170 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.445180 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:05Z","lastTransitionTime":"2025-12-12T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.548956 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.549056 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.549069 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.549087 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.549099 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:05Z","lastTransitionTime":"2025-12-12T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.652106 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.652154 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.652168 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.652192 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.652207 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:05Z","lastTransitionTime":"2025-12-12T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.754845 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.754902 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.754913 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.754935 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.754948 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:05Z","lastTransitionTime":"2025-12-12T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.858978 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.859064 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.859085 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.859108 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.859119 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:05Z","lastTransitionTime":"2025-12-12T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.961971 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.962034 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.962049 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.962072 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:05 crc kubenswrapper[4693]: I1212 15:47:05.962086 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:05Z","lastTransitionTime":"2025-12-12T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.065298 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.065347 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.065359 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.065376 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.065387 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:06Z","lastTransitionTime":"2025-12-12T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.142134 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs\") pod \"network-metrics-daemon-w4zs6\" (UID: \"6ef3804b-c2b3-4645-b60f-9bc977a89f69\") " pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:06 crc kubenswrapper[4693]: E1212 15:47:06.142377 4693 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:47:06 crc kubenswrapper[4693]: E1212 15:47:06.142455 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs podName:6ef3804b-c2b3-4645-b60f-9bc977a89f69 nodeName:}" failed. No retries permitted until 2025-12-12 15:47:22.142433077 +0000 UTC m=+69.311072668 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs") pod "network-metrics-daemon-w4zs6" (UID: "6ef3804b-c2b3-4645-b60f-9bc977a89f69") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.167669 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.167711 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.167725 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.167746 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.167758 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:06Z","lastTransitionTime":"2025-12-12T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.271054 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.271106 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.271119 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.271139 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.271150 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:06Z","lastTransitionTime":"2025-12-12T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.356505 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.356563 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.356592 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:06 crc kubenswrapper[4693]: E1212 15:47:06.356800 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:06 crc kubenswrapper[4693]: E1212 15:47:06.356781 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:06 crc kubenswrapper[4693]: E1212 15:47:06.356921 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.374329 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.374392 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.374405 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.374426 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.374441 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:06Z","lastTransitionTime":"2025-12-12T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.477938 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.478010 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.478022 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.478045 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.478069 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:06Z","lastTransitionTime":"2025-12-12T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.580192 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.580255 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.580283 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.580300 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.580310 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:06Z","lastTransitionTime":"2025-12-12T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.683324 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.683369 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.683378 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.683392 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.683404 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:06Z","lastTransitionTime":"2025-12-12T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.786546 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.786606 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.786615 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.786631 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.786641 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:06Z","lastTransitionTime":"2025-12-12T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.890373 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.890437 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.890451 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.890471 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.890481 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:06Z","lastTransitionTime":"2025-12-12T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.993512 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.993544 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.993553 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.993566 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:06 crc kubenswrapper[4693]: I1212 15:47:06.993575 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:06Z","lastTransitionTime":"2025-12-12T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.051657 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.051756 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.051810 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:07 crc kubenswrapper[4693]: E1212 15:47:07.051855 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:47:39.051829813 +0000 UTC m=+86.220469414 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:47:07 crc kubenswrapper[4693]: E1212 15:47:07.051895 4693 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.051899 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:07 crc kubenswrapper[4693]: E1212 15:47:07.051947 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 15:47:39.051931286 +0000 UTC m=+86.220570947 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:47:07 crc kubenswrapper[4693]: E1212 15:47:07.051982 4693 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:47:07 crc kubenswrapper[4693]: E1212 15:47:07.051996 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:47:07 crc kubenswrapper[4693]: E1212 15:47:07.052012 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 15:47:39.052005258 +0000 UTC m=+86.220644859 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.051982 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:07 crc kubenswrapper[4693]: E1212 15:47:07.052020 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:47:07 crc kubenswrapper[4693]: E1212 15:47:07.052033 4693 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:47:07 crc kubenswrapper[4693]: E1212 15:47:07.052058 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:47:07 crc kubenswrapper[4693]: E1212 15:47:07.052071 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 15:47:39.052057589 +0000 UTC m=+86.220697240 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:47:07 crc kubenswrapper[4693]: E1212 15:47:07.052074 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:47:07 crc kubenswrapper[4693]: E1212 15:47:07.052091 4693 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:47:07 crc kubenswrapper[4693]: E1212 15:47:07.052121 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 15:47:39.052111781 +0000 UTC m=+86.220751462 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.096267 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.096342 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.096355 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.096373 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.096387 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:07Z","lastTransitionTime":"2025-12-12T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.198495 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.198529 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.198728 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.198744 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.198755 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:07Z","lastTransitionTime":"2025-12-12T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.301524 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.301574 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.301587 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.301608 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.301623 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:07Z","lastTransitionTime":"2025-12-12T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.356995 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:07 crc kubenswrapper[4693]: E1212 15:47:07.357183 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.405045 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.405093 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.405105 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.405121 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.405133 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:07Z","lastTransitionTime":"2025-12-12T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.508351 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.508406 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.508419 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.508443 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.508455 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:07Z","lastTransitionTime":"2025-12-12T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.611305 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.611396 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.611416 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.611443 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.611460 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:07Z","lastTransitionTime":"2025-12-12T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.715213 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.715371 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.715395 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.715427 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.715445 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:07Z","lastTransitionTime":"2025-12-12T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.818252 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.818338 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.818348 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.818370 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.818382 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:07Z","lastTransitionTime":"2025-12-12T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.920628 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.920684 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.920701 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.920718 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:07 crc kubenswrapper[4693]: I1212 15:47:07.920730 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:07Z","lastTransitionTime":"2025-12-12T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.022950 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.023002 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.023017 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.023038 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.023054 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:08Z","lastTransitionTime":"2025-12-12T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.125336 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.125391 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.125401 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.125414 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.125423 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:08Z","lastTransitionTime":"2025-12-12T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.228083 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.228138 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.228148 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.228165 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.228177 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:08Z","lastTransitionTime":"2025-12-12T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.331470 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.331534 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.331548 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.331571 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.331585 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:08Z","lastTransitionTime":"2025-12-12T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.356909 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.357035 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.357159 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:08 crc kubenswrapper[4693]: E1212 15:47:08.357344 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:08 crc kubenswrapper[4693]: E1212 15:47:08.357463 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:08 crc kubenswrapper[4693]: E1212 15:47:08.357566 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.434889 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.434949 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.434964 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.434982 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.434994 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:08Z","lastTransitionTime":"2025-12-12T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.538369 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.538419 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.538428 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.538448 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.538463 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:08Z","lastTransitionTime":"2025-12-12T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.641833 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.641888 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.641906 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.641929 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.641945 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:08Z","lastTransitionTime":"2025-12-12T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.744144 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.744196 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.744214 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.744239 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.744252 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:08Z","lastTransitionTime":"2025-12-12T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.847858 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.847917 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.847931 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.847951 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.847964 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:08Z","lastTransitionTime":"2025-12-12T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.950216 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.950325 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.950341 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.950359 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:08 crc kubenswrapper[4693]: I1212 15:47:08.950371 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:08Z","lastTransitionTime":"2025-12-12T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.053348 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.053409 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.053425 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.053447 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.053464 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:09Z","lastTransitionTime":"2025-12-12T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.155684 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.155724 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.155741 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.155758 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.155768 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:09Z","lastTransitionTime":"2025-12-12T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.258600 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.258624 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.258632 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.258645 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.258653 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:09Z","lastTransitionTime":"2025-12-12T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.356976 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:09 crc kubenswrapper[4693]: E1212 15:47:09.357254 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.361545 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.361605 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.361627 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.361654 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.361674 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:09Z","lastTransitionTime":"2025-12-12T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.464257 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.464331 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.464344 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.464362 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.464373 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:09Z","lastTransitionTime":"2025-12-12T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.567028 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.567081 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.567092 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.567108 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.567120 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:09Z","lastTransitionTime":"2025-12-12T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.670457 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.670503 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.670512 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.670527 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.670537 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:09Z","lastTransitionTime":"2025-12-12T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.773325 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.773361 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.773370 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.773385 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.773394 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:09Z","lastTransitionTime":"2025-12-12T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.876060 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.876105 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.876116 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.876133 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.876144 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:09Z","lastTransitionTime":"2025-12-12T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.978342 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.978388 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.978403 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.978419 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:09 crc kubenswrapper[4693]: I1212 15:47:09.978428 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:09Z","lastTransitionTime":"2025-12-12T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.080287 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.080327 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.080339 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.080351 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.080361 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:10Z","lastTransitionTime":"2025-12-12T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.183069 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.183109 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.183117 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.183130 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.183139 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:10Z","lastTransitionTime":"2025-12-12T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.285765 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.285819 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.285829 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.285844 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.285852 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:10Z","lastTransitionTime":"2025-12-12T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.356591 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.356641 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.356692 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:10 crc kubenswrapper[4693]: E1212 15:47:10.356754 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:10 crc kubenswrapper[4693]: E1212 15:47:10.356857 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:10 crc kubenswrapper[4693]: E1212 15:47:10.357441 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.357974 4693 scope.go:117] "RemoveContainer" containerID="a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.388542 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.388873 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.388882 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.388895 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.388904 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:10Z","lastTransitionTime":"2025-12-12T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.490569 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.490619 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.490630 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.490671 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.490683 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:10Z","lastTransitionTime":"2025-12-12T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.593684 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.593752 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.593768 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.593789 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.593802 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:10Z","lastTransitionTime":"2025-12-12T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.696643 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.696693 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.696703 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.696721 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.696732 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:10Z","lastTransitionTime":"2025-12-12T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.754116 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovnkube-controller/1.log" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.757113 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerStarted","Data":"b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953"} Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.757590 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.776512 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:10Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.797457 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:10Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.798640 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.798680 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.798690 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.798704 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.798714 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:10Z","lastTransitionTime":"2025-12-12T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.815818 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:10Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.820473 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.838214 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.842164 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:10Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.855922 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:10Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.868103 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-12T15:47:10Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.880686 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:10Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.897754 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:10Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.901556 4693 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.901612 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.901625 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.901644 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.901656 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:10Z","lastTransitionTime":"2025-12-12T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.910922 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:10Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.930289 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e10
1411824bc3157e46ba7e8953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"message\\\":\\\"v openshift-network-node-identity/network-node-identity-vrzqb]\\\\nI1212 15:46:48.971982 6135 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1212 15:46:48.971865 6135 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z]\\\\nI1212 15:46:48.971989 6135 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 
fd69\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:10Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.942080 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:10Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.956260 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:10Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.973435 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:10Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.985356 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:10Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:10 crc kubenswrapper[4693]: I1212 15:47:10.998697 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:10Z is after 
2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.004206 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.004251 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.004265 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.004329 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.004344 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:11Z","lastTransitionTime":"2025-12-12T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.011149 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994
82919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.025063 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.036906 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.058990 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.078058 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.099736 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"message\\\":\\\"v openshift-network-node-identity/network-node-identity-vrzqb]\\\\nI1212 15:46:48.971982 6135 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1212 15:46:48.971865 6135 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z]\\\\nI1212 15:46:48.971989 6135 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 
fd69\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.106855 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.106884 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.106896 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.106913 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.106925 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:11Z","lastTransitionTime":"2025-12-12T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.113924 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.132614 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.144148 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.156609 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.169120 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.184369 4693 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.197714 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46c78f0b-1016-48e4-b183-e70a6e692146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc30784ce0860622be7856d80caddb1a7f8c510518a0d7dc647eba7bb3671c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36b5280d53c4c3a10ab04273c8f2c02d7118b49f7bcf33eaada7891585e396d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.208478 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.208509 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.208518 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.208531 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.208541 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:11Z","lastTransitionTime":"2025-12-12T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.211597 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.226900 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.238979 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.260396 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":
\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11
\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.274516 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.311231 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.311280 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.311305 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.311325 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.311336 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:11Z","lastTransitionTime":"2025-12-12T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.356951 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:11 crc kubenswrapper[4693]: E1212 15:47:11.357152 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.414176 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.414225 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.414236 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.414249 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.414258 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:11Z","lastTransitionTime":"2025-12-12T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.517083 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.517138 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.517153 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.517174 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.517190 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:11Z","lastTransitionTime":"2025-12-12T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.620303 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.620375 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.620424 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.620452 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.620473 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:11Z","lastTransitionTime":"2025-12-12T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.723372 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.723430 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.723445 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.723476 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.723496 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:11Z","lastTransitionTime":"2025-12-12T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.762158 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovnkube-controller/2.log" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.762831 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovnkube-controller/1.log" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.766197 4693 generic.go:334] "Generic (PLEG): container finished" podID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerID="b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953" exitCode=1 Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.766335 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerDied","Data":"b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953"} Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.766427 4693 scope.go:117] "RemoveContainer" containerID="a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.767413 4693 scope.go:117] "RemoveContainer" containerID="b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953" Dec 12 15:47:11 crc kubenswrapper[4693]: E1212 15:47:11.767674 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.788435 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.799789 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 
15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.812884 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46c78f0b-1016-48e4-b183-e70a6e692146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc30784ce0860622be7856d80caddb1a7f8c510518a0d7dc647eba7bb3671c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36b5280d53c4c3a10ab04273c8f2c02d7118b49f7bcf33eaada7891585e396d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.825630 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.825875 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.825889 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.825907 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.825920 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:11Z","lastTransitionTime":"2025-12-12T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.829395 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.847328 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.847420 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.847457 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.847490 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.847506 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.847515 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:11Z","lastTransitionTime":"2025-12-12T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.858433 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: E1212 15:47:11.859784 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.863331 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.863379 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.863390 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.863405 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.863418 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:11Z","lastTransitionTime":"2025-12-12T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.871791 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: E1212 15:47:11.875446 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.878464 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.878514 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.878532 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.878550 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.878565 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:11Z","lastTransitionTime":"2025-12-12T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.882258 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: E1212 15:47:11.889983 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.893029 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.893696 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.893733 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.893745 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.893765 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.893776 4693 setters.go:603] "Node became not ready" node="crc" 
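
[Note on the condition records] The setters.go records here print the node's Ready condition as plain JSON after the condition= key. A minimal Go sketch that unmarshals one such record follows; the struct mirrors only the fields visible in this log and is illustrative, not the kubelet's own type.

    // Sketch: parse a "Node became not ready" condition as logged above.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // Condition JSON copied from the log record above (message shortened).
        raw := `{"type":"Ready","status":"False",` +
            `"lastHeartbeatTime":"2025-12-12T15:47:11Z",` +
            `"lastTransitionTime":"2025-12-12T15:47:11Z",` +
            `"reason":"KubeletNotReady",` +
            `"message":"container runtime network not ready"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }
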
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:11Z","lastTransitionTime":"2025-12-12T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:11 crc kubenswrapper[4693]: E1212 15:47:11.903738 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.906473 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.910498 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.910531 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.910539 4693 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.910552 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.910574 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:11Z","lastTransitionTime":"2025-12-12T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.920663 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: E1212 15:47:11.921678 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: E1212 15:47:11.921800 4693 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.927596 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
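
[Note on the KubeletNotReady message] Independent of the webhook certificate failure, the repeated NotReady reason in these records names one concrete cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal Go sketch of the same directory check, assuming it is run with read access on the node (the path is taken verbatim from the log):

    // Sketch: check the CNI config directory the kubelet complains about.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        const dir = "/etc/kubernetes/cni/net.d" // path from the log message
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI dir:", err)
            return
        }
        if len(entries) == 0 {
            // Matches the condition: NetworkReady=false, NetworkPluginNotReady.
            fmt.Println("no CNI configuration files; network plugin not ready")
            return
        }
        for _, e := range entries {
            fmt.Println(filepath.Join(dir, e.Name()))
        }
    }

Once the network provider (here, multus and the ovn-kubernetes components seen elsewhere in this log) writes its config into that directory, the Ready condition would be expected to flip back on the next sync.
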
event="NodeHasSufficientMemory" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.927637 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.927649 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.927664 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.927675 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:11Z","lastTransitionTime":"2025-12-12T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.936978 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.965796 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e10
1411824bc3157e46ba7e8953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a31752fd5cd6afb7bda0f2c2f34bf870d61c1283fd56833529e170046504cf7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"message\\\":\\\"v openshift-network-node-identity/network-node-identity-vrzqb]\\\\nI1212 15:46:48.971982 6135 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1212 15:46:48.971865 6135 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:46:48Z is after 2025-08-24T17:21:41Z]\\\\nI1212 15:46:48.971989 6135 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1212 15:47:11.359001 6402 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1212 15:47:11.359022 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.977804 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:11 crc kubenswrapper[4693]: I1212 15:47:11.990273 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.001698 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-12T15:47:11Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.012339 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:12Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.029823 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.029870 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.029881 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.029898 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.029910 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:12Z","lastTransitionTime":"2025-12-12T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.132670 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.132734 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.132751 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.132831 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.132850 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:12Z","lastTransitionTime":"2025-12-12T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.235744 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.235801 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.235816 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.235835 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.235850 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:12Z","lastTransitionTime":"2025-12-12T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.338396 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.338438 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.338452 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.338471 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.338485 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:12Z","lastTransitionTime":"2025-12-12T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.356877 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.356925 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.356962 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:12 crc kubenswrapper[4693]: E1212 15:47:12.357071 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:12 crc kubenswrapper[4693]: E1212 15:47:12.357175 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:12 crc kubenswrapper[4693]: E1212 15:47:12.357397 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.441153 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.441192 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.441206 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.441223 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.441236 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:12Z","lastTransitionTime":"2025-12-12T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.544763 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.544835 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.544863 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.544898 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.544921 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:12Z","lastTransitionTime":"2025-12-12T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.647869 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.647927 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.647949 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.647971 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.647986 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:12Z","lastTransitionTime":"2025-12-12T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.751765 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.751812 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.751828 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.751858 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.751879 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:12Z","lastTransitionTime":"2025-12-12T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.772630 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovnkube-controller/2.log" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.778165 4693 scope.go:117] "RemoveContainer" containerID="b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953" Dec 12 15:47:12 crc kubenswrapper[4693]: E1212 15:47:12.778692 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.796759 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:12Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.811950 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:12Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.826367 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:12Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.838058 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:12Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.851613 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-l
ib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:12Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.856850 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.856912 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.856925 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.856940 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.856952 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:12Z","lastTransitionTime":"2025-12-12T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.878568 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1212 15:47:11.359001 6402 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1212 15:47:11.359022 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:47:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:12Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.893703 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:12Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.908654 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46c78f0b-1016-48e4-b183-e70a6e692146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc30784ce0860622be7856d80caddb1a7f8c510518a0d7dc647eba7bb3671c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36b5280d53c4c3a10ab04273c8f2c02d7118b49f7bcf33eaada7891585e396d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:12Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.928039 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:12Z is after 
2025-08-24T17:21:41Z" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.945692 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:12Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.960557 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.960610 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.960622 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.960640 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.960654 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:12Z","lastTransitionTime":"2025-12-12T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.962511 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:12Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:12 crc kubenswrapper[4693]: I1212 15:47:12.985670 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:12Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.001921 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:12Z is after 2025-08-24T17:21:41Z" Dec 12 
15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.017502 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.031920 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.046051 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.059087 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.063438 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.063497 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.063515 4693 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.063538 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.063552 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:13Z","lastTransitionTime":"2025-12-12T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.166498 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.166568 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.166581 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.166640 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.166654 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:13Z","lastTransitionTime":"2025-12-12T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.269596 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.269660 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.269678 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.269702 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.269720 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:13Z","lastTransitionTime":"2025-12-12T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.357127 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:13 crc kubenswrapper[4693]: E1212 15:47:13.358024 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.372823 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.372974 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.373038 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.373717 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.373746 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:13Z","lastTransitionTime":"2025-12-12T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.375003 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46c78f0b-1016-48e4-b183-e70a6e692146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc30784ce0860622be7856d80caddb1a7f8c510518a0d7dc647eba7bb3671c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36b5280d53c4c3a10ab04273c8f2c02d7118b49f7bcf33eaada7891585e396d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.396972 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 
2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.451442 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.465688 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.476055 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.476095 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.476105 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.476124 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.476135 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:13Z","lastTransitionTime":"2025-12-12T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.484005 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7
dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.498822 4693 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.512223 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.522750 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.534683 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.545171 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.561161 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1212 15:47:11.359001 6402 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1212 15:47:11.359022 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:47:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.570397 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.578422 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.578492 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.578513 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.578537 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.578557 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:13Z","lastTransitionTime":"2025-12-12T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.581854 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.591151 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.603168 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.613014 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.624601 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-l
ib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:13Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.681988 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.682093 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.682132 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.682164 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.682182 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:13Z","lastTransitionTime":"2025-12-12T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.784225 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.784263 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.784294 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.784311 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.784322 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:13Z","lastTransitionTime":"2025-12-12T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.886908 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.886969 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.886986 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.887011 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.887028 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:13Z","lastTransitionTime":"2025-12-12T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.990135 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.990183 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.990195 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.990209 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:13 crc kubenswrapper[4693]: I1212 15:47:13.990217 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:13Z","lastTransitionTime":"2025-12-12T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.093154 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.093188 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.093196 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.093208 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.093232 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:14Z","lastTransitionTime":"2025-12-12T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.195504 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.195542 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.195551 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.195567 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.195576 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:14Z","lastTransitionTime":"2025-12-12T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.298388 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.298473 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.298490 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.298510 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.298526 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:14Z","lastTransitionTime":"2025-12-12T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.356732 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.356780 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.356859 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:14 crc kubenswrapper[4693]: E1212 15:47:14.356984 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:14 crc kubenswrapper[4693]: E1212 15:47:14.357091 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:14 crc kubenswrapper[4693]: E1212 15:47:14.357187 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.400795 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.400828 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.400839 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.400854 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.400864 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:14Z","lastTransitionTime":"2025-12-12T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.503745 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.503799 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.503815 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.503838 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.503856 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:14Z","lastTransitionTime":"2025-12-12T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.611464 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.611516 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.611529 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.611562 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.611574 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:14Z","lastTransitionTime":"2025-12-12T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.714395 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.714438 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.714447 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.714462 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.714473 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:14Z","lastTransitionTime":"2025-12-12T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.816126 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.816165 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.816173 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.816188 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.816198 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:14Z","lastTransitionTime":"2025-12-12T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.918577 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.918635 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.918648 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.918663 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:14 crc kubenswrapper[4693]: I1212 15:47:14.918675 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:14Z","lastTransitionTime":"2025-12-12T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.022090 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.022859 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.022900 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.022927 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.022942 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:15Z","lastTransitionTime":"2025-12-12T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.125059 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.125108 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.125120 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.125136 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.125167 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:15Z","lastTransitionTime":"2025-12-12T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.227571 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.227622 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.227635 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.227661 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.227674 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:15Z","lastTransitionTime":"2025-12-12T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.330639 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.330681 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.330692 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.330710 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.330722 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:15Z","lastTransitionTime":"2025-12-12T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.358719 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:15 crc kubenswrapper[4693]: E1212 15:47:15.358907 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.433366 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.433447 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.433456 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.433470 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.433478 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:15Z","lastTransitionTime":"2025-12-12T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.536262 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.536418 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.536446 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.536522 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.536546 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:15Z","lastTransitionTime":"2025-12-12T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.639612 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.639677 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.639699 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.639726 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.639748 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:15Z","lastTransitionTime":"2025-12-12T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.741875 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.741934 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.741949 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.741969 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.741986 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:15Z","lastTransitionTime":"2025-12-12T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.844718 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.844764 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.844771 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.844785 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.844796 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:15Z","lastTransitionTime":"2025-12-12T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.947796 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.947847 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.947863 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.947885 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:15 crc kubenswrapper[4693]: I1212 15:47:15.947901 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:15Z","lastTransitionTime":"2025-12-12T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.050621 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.050660 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.050671 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.050686 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.050698 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:16Z","lastTransitionTime":"2025-12-12T15:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.153386 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.153757 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.153784 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.153814 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.153835 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:16Z","lastTransitionTime":"2025-12-12T15:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.257213 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.257269 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.257304 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.257321 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.257333 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:16Z","lastTransitionTime":"2025-12-12T15:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.356183 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.356248 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.356322 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:16 crc kubenswrapper[4693]: E1212 15:47:16.356358 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:16 crc kubenswrapper[4693]: E1212 15:47:16.356442 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:16 crc kubenswrapper[4693]: E1212 15:47:16.356505 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.360031 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.360064 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.360075 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.360100 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.360127 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:16Z","lastTransitionTime":"2025-12-12T15:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.463180 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.463240 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.463249 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.463265 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.463289 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:16Z","lastTransitionTime":"2025-12-12T15:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.565878 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.565923 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.565935 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.565951 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.565961 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:16Z","lastTransitionTime":"2025-12-12T15:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.668304 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.668362 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.668374 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.668402 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.668415 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:16Z","lastTransitionTime":"2025-12-12T15:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.770281 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.770318 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.770326 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.770338 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.770346 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:16Z","lastTransitionTime":"2025-12-12T15:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.872507 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.872548 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.872561 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.872577 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.872589 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:16Z","lastTransitionTime":"2025-12-12T15:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.975872 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.975923 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.975939 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.975960 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:16 crc kubenswrapper[4693]: I1212 15:47:16.975977 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:16Z","lastTransitionTime":"2025-12-12T15:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.078525 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.078575 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.078592 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.078609 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.078619 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:17Z","lastTransitionTime":"2025-12-12T15:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.180796 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.180840 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.180851 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.180866 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.180876 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:17Z","lastTransitionTime":"2025-12-12T15:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.282739 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.282797 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.282808 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.282824 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.282835 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:17Z","lastTransitionTime":"2025-12-12T15:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.356457 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:17 crc kubenswrapper[4693]: E1212 15:47:17.356589 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.385598 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.385638 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.385646 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.385659 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.385669 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:17Z","lastTransitionTime":"2025-12-12T15:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.488226 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.488266 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.488294 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.488311 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.488324 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:17Z","lastTransitionTime":"2025-12-12T15:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.590187 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.590231 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.590242 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.590259 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.590298 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:17Z","lastTransitionTime":"2025-12-12T15:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.692753 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.692783 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.692793 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.692805 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.692813 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:17Z","lastTransitionTime":"2025-12-12T15:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.794672 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.794741 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.794753 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.794772 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.794787 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:17Z","lastTransitionTime":"2025-12-12T15:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.897157 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.897220 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.897238 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.897262 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.897313 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:17Z","lastTransitionTime":"2025-12-12T15:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.999270 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.999326 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.999339 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.999354 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:17 crc kubenswrapper[4693]: I1212 15:47:17.999365 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:17Z","lastTransitionTime":"2025-12-12T15:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.101992 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.102032 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.102040 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.102055 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.102068 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:18Z","lastTransitionTime":"2025-12-12T15:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.204380 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.204458 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.204470 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.204494 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.204510 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:18Z","lastTransitionTime":"2025-12-12T15:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.307129 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.307168 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.307179 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.307195 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.307207 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:18Z","lastTransitionTime":"2025-12-12T15:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.357037 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.357081 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.357046 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:18 crc kubenswrapper[4693]: E1212 15:47:18.357158 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:18 crc kubenswrapper[4693]: E1212 15:47:18.357262 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:18 crc kubenswrapper[4693]: E1212 15:47:18.357345 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.409861 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.409921 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.409936 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.409956 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.409967 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:18Z","lastTransitionTime":"2025-12-12T15:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.512820 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.512864 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.512877 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.512894 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.512909 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:18Z","lastTransitionTime":"2025-12-12T15:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.615907 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.615971 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.615983 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.615999 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.616011 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:18Z","lastTransitionTime":"2025-12-12T15:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.718367 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.718403 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.718413 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.718426 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.718435 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:18Z","lastTransitionTime":"2025-12-12T15:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.820717 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.820750 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.820759 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.820773 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.820782 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:18Z","lastTransitionTime":"2025-12-12T15:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.923206 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.923252 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.923262 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.923296 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:18 crc kubenswrapper[4693]: I1212 15:47:18.923305 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:18Z","lastTransitionTime":"2025-12-12T15:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.025334 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.025377 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.025392 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.025407 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.025417 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:19Z","lastTransitionTime":"2025-12-12T15:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.128853 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.128905 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.128920 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.128937 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.128950 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:19Z","lastTransitionTime":"2025-12-12T15:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.231647 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.231688 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.231699 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.231715 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.231725 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:19Z","lastTransitionTime":"2025-12-12T15:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.334017 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.334085 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.334095 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.334110 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.334127 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:19Z","lastTransitionTime":"2025-12-12T15:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.356386 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:19 crc kubenswrapper[4693]: E1212 15:47:19.356537 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.436059 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.436093 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.436103 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.436116 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.436124 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:19Z","lastTransitionTime":"2025-12-12T15:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.538943 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.539004 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.539018 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.539036 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.539057 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:19Z","lastTransitionTime":"2025-12-12T15:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.641988 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.642047 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.642058 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.642075 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.642087 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:19Z","lastTransitionTime":"2025-12-12T15:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.744031 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.744066 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.744079 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.744097 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.744108 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:19Z","lastTransitionTime":"2025-12-12T15:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.846259 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.846573 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.846640 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.846708 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.846798 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:19Z","lastTransitionTime":"2025-12-12T15:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.949577 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.949636 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.949647 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.949661 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:19 crc kubenswrapper[4693]: I1212 15:47:19.949674 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:19Z","lastTransitionTime":"2025-12-12T15:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.052158 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.052193 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.052200 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.052212 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.052220 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:20Z","lastTransitionTime":"2025-12-12T15:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.154542 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.154575 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.154583 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.154595 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.154603 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:20Z","lastTransitionTime":"2025-12-12T15:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.257238 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.257523 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.257592 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.257662 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.257717 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:20Z","lastTransitionTime":"2025-12-12T15:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.356046 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.356046 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:20 crc kubenswrapper[4693]: E1212 15:47:20.356181 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.356382 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:20 crc kubenswrapper[4693]: E1212 15:47:20.356459 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:20 crc kubenswrapper[4693]: E1212 15:47:20.356635 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.359569 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.359627 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.359645 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.359666 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.359683 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:20Z","lastTransitionTime":"2025-12-12T15:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.461824 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.461875 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.461887 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.461903 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.461914 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:20Z","lastTransitionTime":"2025-12-12T15:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.563856 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.563899 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.563913 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.563930 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.563944 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:20Z","lastTransitionTime":"2025-12-12T15:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.666167 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.666192 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.666199 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.666211 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.666220 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:20Z","lastTransitionTime":"2025-12-12T15:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.768784 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.768836 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.768848 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.768865 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.768876 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:20Z","lastTransitionTime":"2025-12-12T15:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.871789 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.872436 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.872476 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.872496 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.872508 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:20Z","lastTransitionTime":"2025-12-12T15:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.974408 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.974448 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.974459 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.974475 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:20 crc kubenswrapper[4693]: I1212 15:47:20.974486 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:20Z","lastTransitionTime":"2025-12-12T15:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.076886 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.076936 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.076946 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.076965 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.076975 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:21Z","lastTransitionTime":"2025-12-12T15:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.179173 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.179230 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.179243 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.179262 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.179293 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:21Z","lastTransitionTime":"2025-12-12T15:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.281759 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.281870 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.281891 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.281920 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.281959 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:21Z","lastTransitionTime":"2025-12-12T15:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.357027 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:21 crc kubenswrapper[4693]: E1212 15:47:21.358035 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.384847 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.384929 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.384947 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.384968 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.384985 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:21Z","lastTransitionTime":"2025-12-12T15:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.487583 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.487629 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.487641 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.487657 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.487668 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:21Z","lastTransitionTime":"2025-12-12T15:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.589677 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.589733 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.589749 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.589772 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.589791 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:21Z","lastTransitionTime":"2025-12-12T15:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.692774 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.692811 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.692820 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.692833 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.692842 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:21Z","lastTransitionTime":"2025-12-12T15:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.796343 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.796620 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.796694 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.796764 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.796838 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:21Z","lastTransitionTime":"2025-12-12T15:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.899147 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.899194 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.899203 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.899225 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:21 crc kubenswrapper[4693]: I1212 15:47:21.899243 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:21Z","lastTransitionTime":"2025-12-12T15:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.001877 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.001923 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.001934 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.001949 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.001963 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:22Z","lastTransitionTime":"2025-12-12T15:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.104602 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.104831 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.104942 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.105074 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.105176 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:22Z","lastTransitionTime":"2025-12-12T15:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.150064 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.150123 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.150133 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.150154 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.150167 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:22Z","lastTransitionTime":"2025-12-12T15:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:22 crc kubenswrapper[4693]: E1212 15:47:22.162850 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:22Z is after 2025-08-24T17:21:41Z"
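
This is the first failure in the section that is not just the CNI wait loop: the node-status PATCH is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a TLS certificate whose NotAfter (2025-08-24T17:21:41Z) is months before the node's current time (2025-12-12T15:47:22Z). A quick way to confirm what the endpoint is actually serving is to pull the certificate and print its validity window; the Go sketch below does that with the standard library (certcheck.go is a hypothetical helper, not part of OpenShift).

```go
// certcheck.go -- hypothetical diagnostic sketch: fetch the certificate the
// webhook endpoint presents and compare its validity window to the clock.
// InsecureSkipVerify only disables chain/time validation so the expired
// certificate can still be inspected; nothing here trusts the connection.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook endpoint from the log message
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Printf("dial %s: %v\n", addr, err)
		os.Exit(1)
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%v\n",
			cert.Subject,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}
```

The retry at 15:47:22.181494 just below fails the same way, so node-status updates will keep failing until the certificate behind that webhook is rotated (or the node clock corrected, if skew rather than a genuinely expired certificate turns out to be the cause).
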
event="NodeHasNoDiskPressure" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.167348 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.167376 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.167400 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:22Z","lastTransitionTime":"2025-12-12T15:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:22 crc kubenswrapper[4693]: E1212 15:47:22.181494 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:22Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.191514 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.191914 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.191939 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.191968 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.191985 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:22Z","lastTransitionTime":"2025-12-12T15:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:22 crc kubenswrapper[4693]: E1212 15:47:22.212821 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:22Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.218048 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.218122 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
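Every node-status patch above fails the same way: the kubelet reaches the network-node-identity webhook on 127.0.0.1:9743, but the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-12, so TLS verification fails before the patch can be admitted. A minimal stdlib sketch that reproduces the verification failure from the node itself; the endpoint and port are taken from the Post URL in the entries above, everything else is assumption:

import socket
import ssl

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint, from the Post URL in the error above

ctx = ssl.create_default_context()
ctx.check_hostname = False  # only the validity window matters here, not the SAN

try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            print("handshake OK:", tls.version())
except ssl.SSLCertVerificationError as err:
    # Expected while the cert is expired: verify_message reads
    # "certificate has expired", matching the kubelet's x509 error.
    # (If the internal CA is not in the local trust store, a trust
    # error can surface here instead.)
    print("verification failed:", err.verify_message)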
event="NodeHasNoDiskPressure" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.218150 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.218173 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.218185 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:22Z","lastTransitionTime":"2025-12-12T15:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:22 crc kubenswrapper[4693]: E1212 15:47:22.233865 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:22Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.238843 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.238916 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
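The NotReady condition itself has a single cause repeated in every entry: the container runtime reports NetworkReady=false because no CNI network configuration exists in /etc/kubernetes/cni/net.d/, i.e. the network plugin has not written one yet. A small sketch of the check one could run on the node, assuming that same conf directory and the usual .conf/.conflist/.json extensions scanned by CNI-aware runtimes:

import os

CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the kubelet error
CNI_EXTS = (".conf", ".conflist", ".json")  # conventional CNI config extensions

try:
    found = [f for f in sorted(os.listdir(CNI_CONF_DIR)) if f.endswith(CNI_EXTS)]
except FileNotFoundError:
    found = []

if found:
    print("CNI config present:", ", ".join(found))
else:
    # Matches the log: the network plugin has not come up, so the node
    # stays NotReady and new pod sandboxes cannot be created.
    print(f"no CNI configuration file in {CNI_CONF_DIR}")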
event="NodeHasNoDiskPressure" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.238935 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.238957 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.238971 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:22Z","lastTransitionTime":"2025-12-12T15:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.239878 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs\") pod \"network-metrics-daemon-w4zs6\" (UID: \"6ef3804b-c2b3-4645-b60f-9bc977a89f69\") " pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:22 crc kubenswrapper[4693]: E1212 15:47:22.240054 4693 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:47:22 crc kubenswrapper[4693]: E1212 15:47:22.240156 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs podName:6ef3804b-c2b3-4645-b60f-9bc977a89f69 nodeName:}" failed. No retries permitted until 2025-12-12 15:47:54.240130515 +0000 UTC m=+101.408770276 (durationBeforeRetry 32s). 
Dec 12 15:47:22 crc kubenswrapper[4693]: E1212 15:47:22.253296 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{[payload identical to the previous attempts; allocatable/capacity, conditions, and images list omitted]}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:22Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:22 crc kubenswrapper[4693]: E1212 15:47:22.253421 4693 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.255788 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.255818 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.255829 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.255845 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.255855 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:22Z","lastTransitionTime":"2025-12-12T15:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.356998 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.357049 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:22 crc kubenswrapper[4693]: E1212 15:47:22.357402 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:22 crc kubenswrapper[4693]: E1212 15:47:22.357520 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.357678 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:22 crc kubenswrapper[4693]: E1212 15:47:22.357825 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.359442 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.359472 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.359484 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.359500 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.359513 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:22Z","lastTransitionTime":"2025-12-12T15:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.463385 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.463437 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.463450 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.463468 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:22 crc kubenswrapper[4693]: I1212 15:47:22.463480 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:22Z","lastTransitionTime":"2025-12-12T15:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [the same five-entry cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats at roughly 100 ms intervals from 15:47:22.566710 through 15:47:23.287496, unchanged except for timestamps; duplicate entries omitted] Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.356758 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:23 crc kubenswrapper[4693]: E1212 15:47:23.357478 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.375817 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.388698 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.390010 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.390069 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.390085 4693 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.390106 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.390121 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:23Z","lastTransitionTime":"2025-12-12T15:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.400086 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.411560 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.422292 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-l
ib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.438660 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e10
1411824bc3157e46ba7e8953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1212 15:47:11.359001 6402 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1212 15:47:11.359022 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:47:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.449864 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.464574 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.475837 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.488348 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.491633 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.491673 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.491684 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.491740 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.491753 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:23Z","lastTransitionTime":"2025-12-12T15:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.504474 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7
dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.518068 4693 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.529596 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46c78f0b-1016-48e4-b183-e70a6e692146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc30784ce0860622be7856d80caddb1a7f8c510518a0d7dc647eba7bb3671c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36b5280d53c4c3a10ab04273c8f2c02d7118b49f7bcf33eaada7891585e396d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.541679 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.552855 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.564102 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.573896 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.593691 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.593730 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.593744 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.593761 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.593773 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:23Z","lastTransitionTime":"2025-12-12T15:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.695413 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.695456 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.695469 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.695484 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.695496 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:23Z","lastTransitionTime":"2025-12-12T15:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.798230 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.798286 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.798298 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.798314 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.798325 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:23Z","lastTransitionTime":"2025-12-12T15:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.809772 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sllz5_e54028d7-cdbb-4fa9-92cd-9570edacb888/kube-multus/0.log" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.809813 4693 generic.go:334] "Generic (PLEG): container finished" podID="e54028d7-cdbb-4fa9-92cd-9570edacb888" containerID="44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc" exitCode=1 Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.809840 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sllz5" event={"ID":"e54028d7-cdbb-4fa9-92cd-9570edacb888","Type":"ContainerDied","Data":"44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc"} Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.810161 4693 scope.go:117] "RemoveContainer" containerID="44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.825088 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.839954 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.853887 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.869564 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.885546 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.901177 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.901448 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.901458 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.901473 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.901432 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:23Z\\\",\\\"message\\\":\\\"2025-12-12T15:46:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb\\\\n2025-12-12T15:46:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb to /host/opt/cni/bin/\\\\n2025-12-12T15:46:37Z [verbose] multus-daemon started\\\\n2025-12-12T15:46:37Z [verbose] Readiness Indicator file check\\\\n2025-12-12T15:47:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.901483 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:23Z","lastTransitionTime":"2025-12-12T15:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.921705 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e10
1411824bc3157e46ba7e8953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1212 15:47:11.359001 6402 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1212 15:47:11.359022 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:47:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.937042 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46c78f0b-1016-48e4-b183-e70a6e692146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc30784ce0860622be7856d80caddb1a7f8c510518a0d7dc647eba7bb3671c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36b5280d53c4c3a10ab04273c8f2c02d7118b49f7bcf33eaada7891585e396d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.948161 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.960083 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.972404 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:23 crc kubenswrapper[4693]: I1212 15:47:23.992508 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:23Z is after 
2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.004163 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.004808 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.004853 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.004865 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.004882 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.004893 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:24Z","lastTransitionTime":"2025-12-12T15:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.017322 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.028315 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.041858 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.054182 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.107585 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.107622 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.107632 4693 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.107648 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.107662 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:24Z","lastTransitionTime":"2025-12-12T15:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.211338 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.211389 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.211400 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.211418 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.211432 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:24Z","lastTransitionTime":"2025-12-12T15:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.313974 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.314038 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.314053 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.314074 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.314092 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:24Z","lastTransitionTime":"2025-12-12T15:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.356923 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:24 crc kubenswrapper[4693]: E1212 15:47:24.357035 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.357189 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:24 crc kubenswrapper[4693]: E1212 15:47:24.357249 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.357491 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:24 crc kubenswrapper[4693]: E1212 15:47:24.357625 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.416100 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.416135 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.416146 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.416162 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.416175 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:24Z","lastTransitionTime":"2025-12-12T15:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.518056 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.518296 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.518396 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.518483 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.518567 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:24Z","lastTransitionTime":"2025-12-12T15:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.621048 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.621084 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.621095 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.621110 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.621120 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:24Z","lastTransitionTime":"2025-12-12T15:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.723194 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.723233 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.723243 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.723257 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.723304 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:24Z","lastTransitionTime":"2025-12-12T15:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.813968 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sllz5_e54028d7-cdbb-4fa9-92cd-9570edacb888/kube-multus/0.log" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.814013 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sllz5" event={"ID":"e54028d7-cdbb-4fa9-92cd-9570edacb888","Type":"ContainerStarted","Data":"3dcd0e248c19f95611ffa8d0a665c032dff039d82f9b088c437e486136574fce"} Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.825693 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.825784 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.825801 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.825824 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.825871 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:24Z","lastTransitionTime":"2025-12-12T15:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.828765 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.841100 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.858741 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.871820 4693 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dcd0e248c19f95611ffa8d0a665c032dff039d82f9b088c437e486136574fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:23Z\\\",\\\"message\\\":\\\"2025-12-12T15:46:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb\\\\n2025-12-12T15:46:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb to /host/opt/cni/bin/\\\\n2025-12-12T15:46:37Z [verbose] multus-daemon started\\\\n2025-12-12T15:46:37Z [verbose] Readiness Indicator file check\\\\n2025-12-12T15:47:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.890554 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1212 15:47:11.359001 6402 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1212 15:47:11.359022 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:47:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.900576 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.911599 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.922045 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.928250 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.928430 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.928532 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.928644 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.928829 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:24Z","lastTransitionTime":"2025-12-12T15:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.933032 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.946527 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.956991 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 
15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.968714 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46c78f0b-1016-48e4-b183-e70a6e692146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc30784ce0860622be7856d80caddb1a7f8c510518a0d7dc647eba7bb3671c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36b5280d53c4c3a10ab04273c8f2c02d7118b49f7bcf33eaada7891585e396d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.981541 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:24 crc kubenswrapper[4693]: I1212 15:47:24.993302 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:24Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.002785 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:25Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.016473 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:25Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.026028 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:25Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.031584 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.031719 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.031794 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.031871 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.031951 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:25Z","lastTransitionTime":"2025-12-12T15:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.134131 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.134173 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.134182 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.134197 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.134209 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:25Z","lastTransitionTime":"2025-12-12T15:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.236656 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.236735 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.236745 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.236760 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.236772 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:25Z","lastTransitionTime":"2025-12-12T15:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.339376 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.339425 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.339439 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.339459 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.339474 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:25Z","lastTransitionTime":"2025-12-12T15:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.356895 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:25 crc kubenswrapper[4693]: E1212 15:47:25.357017 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.442641 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.442684 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.442700 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.442720 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.442735 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:25Z","lastTransitionTime":"2025-12-12T15:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.545100 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.545164 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.545180 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.545203 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.545221 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:25Z","lastTransitionTime":"2025-12-12T15:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.647746 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.647793 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.647844 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.647932 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.647962 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:25Z","lastTransitionTime":"2025-12-12T15:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.750809 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.750871 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.750885 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.750900 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.750911 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:25Z","lastTransitionTime":"2025-12-12T15:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.853504 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.853555 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.853573 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.853727 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.853754 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:25Z","lastTransitionTime":"2025-12-12T15:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.956329 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.956398 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.956416 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.956447 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:25 crc kubenswrapper[4693]: I1212 15:47:25.956485 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:25Z","lastTransitionTime":"2025-12-12T15:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.058756 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.058791 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.058801 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.058816 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.058825 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:26Z","lastTransitionTime":"2025-12-12T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.162105 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.162241 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.162262 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.162326 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.162346 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:26Z","lastTransitionTime":"2025-12-12T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.265552 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.265617 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.265639 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.265667 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.265689 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:26Z","lastTransitionTime":"2025-12-12T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.356689 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.356727 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.356743 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:26 crc kubenswrapper[4693]: E1212 15:47:26.357514 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:26 crc kubenswrapper[4693]: E1212 15:47:26.357648 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:26 crc kubenswrapper[4693]: E1212 15:47:26.357864 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.358049 4693 scope.go:117] "RemoveContainer" containerID="b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953" Dec 12 15:47:26 crc kubenswrapper[4693]: E1212 15:47:26.358344 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.369411 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.369520 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.369547 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.369579 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.369612 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:26Z","lastTransitionTime":"2025-12-12T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.472477 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.472546 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.472567 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.472589 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.472604 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:26Z","lastTransitionTime":"2025-12-12T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.575725 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.575760 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.575768 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.575782 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.575791 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:26Z","lastTransitionTime":"2025-12-12T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.678042 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.678089 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.678097 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.678111 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.678119 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:26Z","lastTransitionTime":"2025-12-12T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.780698 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.780740 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.780748 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.780762 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.780783 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:26Z","lastTransitionTime":"2025-12-12T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.882955 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.882995 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.883004 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.883018 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.883027 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:26Z","lastTransitionTime":"2025-12-12T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.985994 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.986243 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.986373 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.986463 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:26 crc kubenswrapper[4693]: I1212 15:47:26.986557 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:26Z","lastTransitionTime":"2025-12-12T15:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.088567 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.088611 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.088622 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.088637 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.088649 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:27Z","lastTransitionTime":"2025-12-12T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.195367 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.195404 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.195415 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.195430 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.195441 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:27Z","lastTransitionTime":"2025-12-12T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.298323 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.298368 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.298380 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.298396 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.298408 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:27Z","lastTransitionTime":"2025-12-12T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.356359 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:27 crc kubenswrapper[4693]: E1212 15:47:27.356561 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.400979 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.401304 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.401425 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.401524 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.401611 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:27Z","lastTransitionTime":"2025-12-12T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.504221 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.504257 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.504284 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.504311 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.504325 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:27Z","lastTransitionTime":"2025-12-12T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.606086 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.606125 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.606136 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.606174 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.606187 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:27Z","lastTransitionTime":"2025-12-12T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.709878 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.709917 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.709926 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.709938 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.709947 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:27Z","lastTransitionTime":"2025-12-12T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.812396 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.812433 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.812461 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.812476 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.812487 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:27Z","lastTransitionTime":"2025-12-12T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.914913 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.914967 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.914977 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.914993 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:27 crc kubenswrapper[4693]: I1212 15:47:27.915004 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:27Z","lastTransitionTime":"2025-12-12T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.016971 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.017038 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.017050 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.017064 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.017075 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:28Z","lastTransitionTime":"2025-12-12T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.119476 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.119563 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.119615 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.119647 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.119668 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:28Z","lastTransitionTime":"2025-12-12T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.222469 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.222529 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.222547 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.222570 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.222588 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:28Z","lastTransitionTime":"2025-12-12T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.325238 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.325317 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.325334 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.325359 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.325376 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:28Z","lastTransitionTime":"2025-12-12T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.356154 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.356214 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:28 crc kubenswrapper[4693]: E1212 15:47:28.356318 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.356437 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:28 crc kubenswrapper[4693]: E1212 15:47:28.356538 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:28 crc kubenswrapper[4693]: E1212 15:47:28.356787 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.367409 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.428148 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.428186 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.428244 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.428263 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.428303 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:28Z","lastTransitionTime":"2025-12-12T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.530822 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.530858 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.530869 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.530883 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.530896 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:28Z","lastTransitionTime":"2025-12-12T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.633521 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.633563 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.633574 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.633589 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.633601 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:28Z","lastTransitionTime":"2025-12-12T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.735200 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.735261 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.735310 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.735333 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.735349 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:28Z","lastTransitionTime":"2025-12-12T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.837002 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.837050 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.837064 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.837082 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.837124 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:28Z","lastTransitionTime":"2025-12-12T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.943193 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.943245 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.943259 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.943297 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:28 crc kubenswrapper[4693]: I1212 15:47:28.943310 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:28Z","lastTransitionTime":"2025-12-12T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.046424 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.046474 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.046486 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.046502 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.046513 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:29Z","lastTransitionTime":"2025-12-12T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.148886 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.148925 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.148934 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.148947 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.148955 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:29Z","lastTransitionTime":"2025-12-12T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.250757 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.250811 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.250824 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.250890 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.250907 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:29Z","lastTransitionTime":"2025-12-12T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.353065 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.353125 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.353138 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.353159 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.353173 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:29Z","lastTransitionTime":"2025-12-12T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.356638 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:29 crc kubenswrapper[4693]: E1212 15:47:29.356757 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.455727 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.455769 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.455778 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.455795 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.455806 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:29Z","lastTransitionTime":"2025-12-12T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.558635 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.558706 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.558718 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.558732 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.558741 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:29Z","lastTransitionTime":"2025-12-12T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.661339 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.661393 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.661405 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.661420 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.661430 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:29Z","lastTransitionTime":"2025-12-12T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.763914 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.763976 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.763987 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.764030 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.764041 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:29Z","lastTransitionTime":"2025-12-12T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.865992 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.866034 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.866044 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.866061 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.866072 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:29Z","lastTransitionTime":"2025-12-12T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.968709 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.968776 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.968798 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.968827 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:29 crc kubenswrapper[4693]: I1212 15:47:29.968847 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:29Z","lastTransitionTime":"2025-12-12T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.072124 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.072191 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.072216 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.072243 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.072263 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:30Z","lastTransitionTime":"2025-12-12T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.175734 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.175799 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.175815 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.175841 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.175857 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:30Z","lastTransitionTime":"2025-12-12T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.278115 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.278152 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.278161 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.278176 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.278187 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:30Z","lastTransitionTime":"2025-12-12T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.356590 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:30 crc kubenswrapper[4693]: E1212 15:47:30.356728 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.356759 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.356786 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:30 crc kubenswrapper[4693]: E1212 15:47:30.356912 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:30 crc kubenswrapper[4693]: E1212 15:47:30.356992 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.380926 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.380994 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.381007 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.381024 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.381036 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:30Z","lastTransitionTime":"2025-12-12T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.483023 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.483062 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.483074 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.483090 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.483101 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:30Z","lastTransitionTime":"2025-12-12T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.585901 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.585963 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.585982 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.586008 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.586029 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:30Z","lastTransitionTime":"2025-12-12T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.688140 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.688199 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.688211 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.688226 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.688237 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:30Z","lastTransitionTime":"2025-12-12T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.790951 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.790997 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.791006 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.791037 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.791046 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:30Z","lastTransitionTime":"2025-12-12T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.893162 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.893230 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.893242 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.893263 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.893295 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:30Z","lastTransitionTime":"2025-12-12T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.995740 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.995780 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.995791 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.995806 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:30 crc kubenswrapper[4693]: I1212 15:47:30.995817 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:30Z","lastTransitionTime":"2025-12-12T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.098340 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.098398 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.098411 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.098426 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.098459 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:31Z","lastTransitionTime":"2025-12-12T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.201128 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.201159 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.201166 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.201180 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.201188 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:31Z","lastTransitionTime":"2025-12-12T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.303714 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.303780 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.303792 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.303812 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.303824 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:31Z","lastTransitionTime":"2025-12-12T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.356870 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:31 crc kubenswrapper[4693]: E1212 15:47:31.357026 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.406534 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.406596 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.406605 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.406619 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.406628 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:31Z","lastTransitionTime":"2025-12-12T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.510343 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.510384 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.510394 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.510408 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.510448 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:31Z","lastTransitionTime":"2025-12-12T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.612174 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.612232 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.612241 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.612259 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.612292 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:31Z","lastTransitionTime":"2025-12-12T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.714729 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.714772 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.714787 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.714805 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.714818 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:31Z","lastTransitionTime":"2025-12-12T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.817975 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.818003 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.818011 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.818025 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.818034 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:31Z","lastTransitionTime":"2025-12-12T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.921438 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.921591 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.921619 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.921645 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:31 crc kubenswrapper[4693]: I1212 15:47:31.921666 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:31Z","lastTransitionTime":"2025-12-12T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.024912 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.024955 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.024994 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.025016 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.025028 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:32Z","lastTransitionTime":"2025-12-12T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.128088 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.128169 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.128184 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.128204 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.128217 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:32Z","lastTransitionTime":"2025-12-12T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.231399 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.231495 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.231518 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.231542 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.231560 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:32Z","lastTransitionTime":"2025-12-12T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.334736 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.334809 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.334847 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.334876 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.334897 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:32Z","lastTransitionTime":"2025-12-12T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.356854 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.356941 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.356977 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:32 crc kubenswrapper[4693]: E1212 15:47:32.357053 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:32 crc kubenswrapper[4693]: E1212 15:47:32.357441 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:32 crc kubenswrapper[4693]: E1212 15:47:32.357717 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.380758 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.437181 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.437251 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.437359 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.437394 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.437417 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:32Z","lastTransitionTime":"2025-12-12T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.460527 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.460572 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.460590 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.460613 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.460632 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:32Z","lastTransitionTime":"2025-12-12T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:32 crc kubenswrapper[4693]: E1212 15:47:32.483884 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:32Z is after 
2025-08-24T17:21:41Z" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.489672 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.489735 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.489791 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.489843 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.489868 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:32Z","lastTransitionTime":"2025-12-12T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:32 crc kubenswrapper[4693]: E1212 15:47:32.509680 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:32Z is after 
2025-08-24T17:21:41Z" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.513023 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.513061 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.513073 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.513089 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.513098 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:32Z","lastTransitionTime":"2025-12-12T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:32 crc kubenswrapper[4693]: E1212 15:47:32.524885 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:32Z is after 
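The patch never reaches the API server's admission logic: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate whose notAfter (2025-08-24T17:21:41Z) is months behind the node clock (2025-12-12T15:47:32Z). A small stdlib-only Python sketch that reproduces the verification failure against that endpoint follows; the script name is invented, and with the default system trust store the handshake may fail on the unknown issuer before it reaches the expiry check, but either way verify_message reports why verification fails:

    # check_webhook_cert.py - hypothetical probe of the endpoint from the log:
    # Post "https://127.0.0.1:9743/node?timeout=10s" -> certificate has expired
    import socket
    import ssl

    HOST, PORT = "127.0.0.1", 9743  # node-identity webhook address from the log

    def check(host: str, port: int) -> None:
        ctx = ssl.create_default_context()  # checks validity dates against the clock
        ctx.check_hostname = False          # only the validity period matters here
        try:
            with socket.create_connection((host, port), timeout=10) as sock:
                with ctx.wrap_socket(sock) as tls:
                    # only reached if verification (including expiry) passed
                    print("handshake OK, notAfter =", tls.getpeercert().get("notAfter"))
        except ssl.SSLCertVerificationError as exc:
            # expected while the serving cert is expired, mirroring the kubelet error
            # "x509: certificate has expired or is not yet valid"
            print("verification failed:", exc.verify_message)
        except OSError as exc:
            print("connection failed:", exc)

    if __name__ == "__main__":
        check(HOST, PORT)

The conventional one-liner for the same check is openssl s_client -connect 127.0.0.1:9743 piped through openssl x509 -noout -dates, which would report the expired notAfter date quoted in the log.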
2025-08-24T17:21:41Z" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.530088 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.530142 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.530154 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.530170 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.530182 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:32Z","lastTransitionTime":"2025-12-12T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:32 crc kubenswrapper[4693]: E1212 15:47:32.544779 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:32Z is after 
2025-08-24T17:21:41Z" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.548325 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.548386 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.548433 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.548457 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.548468 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:32Z","lastTransitionTime":"2025-12-12T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:32 crc kubenswrapper[4693]: E1212 15:47:32.562855 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:32Z is after 
2025-08-24T17:21:41Z" Dec 12 15:47:32 crc kubenswrapper[4693]: E1212 15:47:32.563027 4693 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.564907 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.564960 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.564969 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.564983 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.564994 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:32Z","lastTransitionTime":"2025-12-12T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.667811 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.667866 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.667881 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.667902 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.667917 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:32Z","lastTransitionTime":"2025-12-12T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.770366 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.770417 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.770431 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.770448 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.770461 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:32Z","lastTransitionTime":"2025-12-12T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.872350 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.872388 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.872404 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.872423 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.872433 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:32Z","lastTransitionTime":"2025-12-12T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.975006 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.975040 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.975048 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.975062 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:32 crc kubenswrapper[4693]: I1212 15:47:32.975073 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:32Z","lastTransitionTime":"2025-12-12T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.077659 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.077710 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.077722 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.077738 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.077750 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:33Z","lastTransitionTime":"2025-12-12T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.182499 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.182639 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.182662 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.183097 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.183400 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:33Z","lastTransitionTime":"2025-12-12T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.286376 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.286441 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.286462 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.286490 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.286510 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:33Z","lastTransitionTime":"2025-12-12T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.356799 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:33 crc kubenswrapper[4693]: E1212 15:47:33.357049 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.389355 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.389405 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.389422 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.389444 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.389463 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:33Z","lastTransitionTime":"2025-12-12T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.396250 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1212 15:47:11.359001 6402 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1212 15:47:11.359022 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:47:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.412596 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.435520 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.454602 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e4586ed-cc1b-4024-a4a7-aa0431052bad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23217ef6881b3e63efba7e3f80279f3a3a967f82adaaaee3ce1235a1164e2f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138206b8b174ebead583b6953999e7e3f8699191291ba8635a106d8ed56efbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://138206b8b174ebead583b6953999e7e3f8699191291ba8635a106d8ed56efbb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.476335 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 
crc kubenswrapper[4693]: I1212 15:47:33.491990 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.492052 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.492087 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.492128 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.492154 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:33Z","lastTransitionTime":"2025-12-12T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.500374 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.515617 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.532507 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dcd0e248c19f95611ffa8d0a665c032dff039d82f9b088c437e486136574fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:23Z\\\",\\\"message\\\":\\\"2025-12-12T15:46:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb\\\\n2025-12-12T15:46:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb to /host/opt/cni/bin/\\\\n2025-12-12T15:46:37Z [verbose] multus-daemon started\\\\n2025-12-12T15:46:37Z [verbose] Readiness Indicator file check\\\\n2025-12-12T15:47:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.546370 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46c78f0b-1016-48e4-b183-e70a6e692146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc30784ce0860622be7856d80caddb1a7f8c510518a0d7dc647eba7bb3671c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36b5280d53c4c3a10ab04273c8f2c02d7118b49f7bcf33eaada7891585e396d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.560780 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 
2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.576172 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.589294 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.595441 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.595501 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.595518 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.595540 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.595552 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:33Z","lastTransitionTime":"2025-12-12T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.609593 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7
dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.621713 4693 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.634034 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.645798 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.665733 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08221ae4-3d15-4ff7-825f-cc2ce2b72537\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcec8f0c1c45bdf87fbd59304e0059ebc71ad896e88f3033611e2179259226e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e63cb5d27ac7c233ff4d15cd75532081dd0a4da7c8cb027bf2d500952e0711e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71bf63aa4388b0b929872aed61fe7eb400fa636b9e479395331e3ed433b2ad79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e74fc49bf4c47ad5e84f055d0a28da0a1a77c4
aead41edab8df49991ff250fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b8a94d6e3115a3afb2daec3d094b3b600e283e93c7f601999eebc5c5543db39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b1b97779c4ee45d0ecc02bd4fddf2ca83c945878e2dff9464b4141686b35fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b1b97779c4ee45d0ecc02bd4fddf2ca83c945878e2dff9464b4141686b35fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f250f611d130fc50b8e55150a897a3883f81556a7ba929f6dadb35c352dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://765f250f611d130fc50b8e55150a897a3883f81556a7ba929f6dadb35c352dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de183a390733f9a095b9f0ddb181c9e04a8092d555b74ffc3b3d91b48b3c3b10\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de183a390733f9a095b9f0ddb181c9e04a8092d555b74ffc3b3d91b48b3c3b10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.679924 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4
832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.696592 4693 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:33Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.698191 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.698225 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 
15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.698235 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.698249 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.698259 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:33Z","lastTransitionTime":"2025-12-12T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.801714 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.801831 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.801855 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.801875 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.801894 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:33Z","lastTransitionTime":"2025-12-12T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.905410 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.905485 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.905500 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.905515 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:33 crc kubenswrapper[4693]: I1212 15:47:33.905527 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:33Z","lastTransitionTime":"2025-12-12T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.008473 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.008527 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.008538 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.008555 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.008567 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:34Z","lastTransitionTime":"2025-12-12T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.111207 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.111253 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.111265 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.111302 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.111315 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:34Z","lastTransitionTime":"2025-12-12T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.214438 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.214498 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.214511 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.214528 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.214540 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:34Z","lastTransitionTime":"2025-12-12T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.317357 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.317462 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.317488 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.317516 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.317540 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:34Z","lastTransitionTime":"2025-12-12T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.356897 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.356967 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:34 crc kubenswrapper[4693]: E1212 15:47:34.357067 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.356903 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:34 crc kubenswrapper[4693]: E1212 15:47:34.357214 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:34 crc kubenswrapper[4693]: E1212 15:47:34.357425 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.420775 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.420855 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.420880 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.420909 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.420932 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:34Z","lastTransitionTime":"2025-12-12T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.523961 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.524024 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.524037 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.524060 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.524074 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:34Z","lastTransitionTime":"2025-12-12T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.626893 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.626942 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.626958 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.626981 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.626998 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:34Z","lastTransitionTime":"2025-12-12T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.730200 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.730332 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.730349 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.730372 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.730392 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:34Z","lastTransitionTime":"2025-12-12T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.833667 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.833754 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.833783 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.833812 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.833834 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:34Z","lastTransitionTime":"2025-12-12T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.937356 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.937445 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.937457 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.937474 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:34 crc kubenswrapper[4693]: I1212 15:47:34.937487 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:34Z","lastTransitionTime":"2025-12-12T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.040627 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.040667 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.040681 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.040698 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.040710 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:35Z","lastTransitionTime":"2025-12-12T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.144145 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.144207 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.144219 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.144242 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.144255 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:35Z","lastTransitionTime":"2025-12-12T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.246750 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.246789 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.246805 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.246829 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.246855 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:35Z","lastTransitionTime":"2025-12-12T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.350476 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.350543 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.350560 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.350590 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.350613 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:35Z","lastTransitionTime":"2025-12-12T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.356877 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:35 crc kubenswrapper[4693]: E1212 15:47:35.357068 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.454511 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.454590 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.454609 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.454638 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.454659 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:35Z","lastTransitionTime":"2025-12-12T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.557656 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.557692 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.557703 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.557743 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.557758 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:35Z","lastTransitionTime":"2025-12-12T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.660997 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.661059 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.661069 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.661089 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.661101 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:35Z","lastTransitionTime":"2025-12-12T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.763891 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.763938 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.763951 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.763971 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.763984 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:35Z","lastTransitionTime":"2025-12-12T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.866410 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.866492 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.866507 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.866527 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.866543 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:35Z","lastTransitionTime":"2025-12-12T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.969110 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.969216 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.969230 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.969248 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:35 crc kubenswrapper[4693]: I1212 15:47:35.969261 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:35Z","lastTransitionTime":"2025-12-12T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.072586 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.072648 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.072664 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.072686 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.072702 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:36Z","lastTransitionTime":"2025-12-12T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.174950 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.175100 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.175121 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.175145 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.175162 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:36Z","lastTransitionTime":"2025-12-12T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.281201 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.281309 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.281329 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.281352 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.281367 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:36Z","lastTransitionTime":"2025-12-12T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.356555 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.356675 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:36 crc kubenswrapper[4693]: E1212 15:47:36.356769 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.356923 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:36 crc kubenswrapper[4693]: E1212 15:47:36.357124 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:36 crc kubenswrapper[4693]: E1212 15:47:36.357368 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.384494 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.384562 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.384578 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.384603 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.384622 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:36Z","lastTransitionTime":"2025-12-12T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.490492 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.490554 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.490568 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.490588 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.490607 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:36Z","lastTransitionTime":"2025-12-12T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.593499 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.593574 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.593590 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.593620 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.593643 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:36Z","lastTransitionTime":"2025-12-12T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.695943 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.695995 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.696007 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.696023 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.696037 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:36Z","lastTransitionTime":"2025-12-12T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.798362 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.798856 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.798947 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.798993 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.799021 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:36Z","lastTransitionTime":"2025-12-12T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.902143 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.902207 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.902225 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.902249 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:36 crc kubenswrapper[4693]: I1212 15:47:36.902294 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:36Z","lastTransitionTime":"2025-12-12T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.005072 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.005132 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.005149 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.005171 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.005188 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:37Z","lastTransitionTime":"2025-12-12T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.107131 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.107179 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.107188 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.107202 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.107213 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:37Z","lastTransitionTime":"2025-12-12T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.209026 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.209066 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.209074 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.209088 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.209097 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:37Z","lastTransitionTime":"2025-12-12T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.312342 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.312630 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.312768 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.312806 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.312827 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:37Z","lastTransitionTime":"2025-12-12T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.356541 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:37 crc kubenswrapper[4693]: E1212 15:47:37.356719 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.415576 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.415615 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.415623 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.415638 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.415647 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:37Z","lastTransitionTime":"2025-12-12T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.518760 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.518831 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.518847 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.518870 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.518887 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:37Z","lastTransitionTime":"2025-12-12T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.622106 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.622151 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.622163 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.622180 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.622192 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:37Z","lastTransitionTime":"2025-12-12T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.724719 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.724783 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.724804 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.724823 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.724838 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:37Z","lastTransitionTime":"2025-12-12T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.828208 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.828261 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.828289 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.828304 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.828312 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:37Z","lastTransitionTime":"2025-12-12T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.930223 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.930255 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.930265 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.930297 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:37 crc kubenswrapper[4693]: I1212 15:47:37.930310 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:37Z","lastTransitionTime":"2025-12-12T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.033740 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.033827 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.033846 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.033864 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.033878 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:38Z","lastTransitionTime":"2025-12-12T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.136621 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.136674 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.136688 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.136706 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.136719 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:38Z","lastTransitionTime":"2025-12-12T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.239187 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.239240 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.239250 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.239263 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.239293 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:38Z","lastTransitionTime":"2025-12-12T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.341688 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.341735 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.341745 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.341764 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.341776 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:38Z","lastTransitionTime":"2025-12-12T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.356094 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:38 crc kubenswrapper[4693]: E1212 15:47:38.356213 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.356095 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.356093 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:38 crc kubenswrapper[4693]: E1212 15:47:38.356444 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:38 crc kubenswrapper[4693]: E1212 15:47:38.356550 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.444322 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.444382 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.444390 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.444404 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.444414 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:38Z","lastTransitionTime":"2025-12-12T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.547691 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.547784 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.547794 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.547814 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.547827 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:38Z","lastTransitionTime":"2025-12-12T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.651113 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.651180 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.651197 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.651223 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.651241 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:38Z","lastTransitionTime":"2025-12-12T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.754053 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.754118 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.754128 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.754144 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.754154 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:38Z","lastTransitionTime":"2025-12-12T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.856197 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.856245 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.856259 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.856293 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.856305 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:38Z","lastTransitionTime":"2025-12-12T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.958995 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.959045 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.959059 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.959077 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:38 crc kubenswrapper[4693]: I1212 15:47:38.959090 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:38Z","lastTransitionTime":"2025-12-12T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.062717 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.062781 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.062797 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.062818 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.062833 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:39Z","lastTransitionTime":"2025-12-12T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.121570 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.121694 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.121731 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:39 crc kubenswrapper[4693]: E1212 15:47:39.121769 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:43.12173001 +0000 UTC m=+150.290369631 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.121834 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:39 crc kubenswrapper[4693]: E1212 15:47:39.121857 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:47:39 crc kubenswrapper[4693]: E1212 15:47:39.121875 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:47:39 crc kubenswrapper[4693]: E1212 15:47:39.121888 4693 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.121925 4693 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:39 crc kubenswrapper[4693]: E1212 15:47:39.121944 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:43.121927025 +0000 UTC m=+150.290566636 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:47:39 crc kubenswrapper[4693]: E1212 15:47:39.121857 4693 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:47:39 crc kubenswrapper[4693]: E1212 15:47:39.122009 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 15:47:39 crc kubenswrapper[4693]: E1212 15:47:39.122030 4693 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 15:47:39 crc kubenswrapper[4693]: E1212 15:47:39.122039 4693 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:47:39 crc kubenswrapper[4693]: E1212 15:47:39.122040 4693 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:47:39 crc kubenswrapper[4693]: E1212 15:47:39.121987 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:43.121979406 +0000 UTC m=+150.290619017 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 15:47:39 crc kubenswrapper[4693]: E1212 15:47:39.122223 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-12 15:48:43.122208832 +0000 UTC m=+150.290848473 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 15:47:39 crc kubenswrapper[4693]: E1212 15:47:39.122245 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:43.122235723 +0000 UTC m=+150.290875394 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.165767 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.165839 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.165851 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.165871 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.165883 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:39Z","lastTransitionTime":"2025-12-12T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
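The three MountVolume.SetUp failures and the UnmountVolume.TearDown failure above are all parked with "durationBeforeRetry 1m4s": the kubelet's per-volume operation queue backs off exponentially rather than retrying hot. A 1m4s delay is what a doubling backoff produces on the eighth consecutive failure if it starts at 500ms. The sketch below assumes those constants (500ms initial, 2m2s cap), which match the kubelet volume backoff as I understand it, but treat them as an assumption rather than a quote of the source tree:

package main

import (
	"fmt"
	"time"
)

// Doubling backoff: 500ms initial delay, doubled after each failure,
// capped at 2m2s. Failure 8 lands on 1m4s, matching the log above.
func main() {
	const (
		initialDelay = 500 * time.Millisecond
		maxDelay     = 2*time.Minute + 2*time.Second
	)
	delay := initialDelay
	for failure := 1; failure <= 10; failure++ {
		fmt.Printf("after failure %2d: wait %v\n", failure, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

The "object ... not registered" errors are a symptom of the same startup race: the kubelet has not yet synced the ConfigMaps and Secrets these volumes reference, so every mount attempt is pushed out on the same backoff curve.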
Has your network provider started?"} Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.268726 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.268782 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.268796 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.268813 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.268841 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:39Z","lastTransitionTime":"2025-12-12T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.356445 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:39 crc kubenswrapper[4693]: E1212 15:47:39.356610 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.370992 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.371054 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.371064 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.371078 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.371088 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:39Z","lastTransitionTime":"2025-12-12T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.474149 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.474219 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.474237 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.474290 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.474309 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:39Z","lastTransitionTime":"2025-12-12T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.576926 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.576991 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.577013 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.577043 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.577088 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:39Z","lastTransitionTime":"2025-12-12T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.679139 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.679167 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.679175 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.679188 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.679198 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:39Z","lastTransitionTime":"2025-12-12T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.782535 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.782594 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.782611 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.782635 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.782654 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:39Z","lastTransitionTime":"2025-12-12T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.885458 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.885501 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.885510 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.885528 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.885538 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:39Z","lastTransitionTime":"2025-12-12T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.988771 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.989346 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.989536 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.989704 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:39 crc kubenswrapper[4693]: I1212 15:47:39.989881 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:39Z","lastTransitionTime":"2025-12-12T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.093183 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.093604 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.093778 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.093939 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.094072 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:40Z","lastTransitionTime":"2025-12-12T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.196854 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.196895 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.196908 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.196924 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.196936 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:40Z","lastTransitionTime":"2025-12-12T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.299582 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.299649 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.299662 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.299681 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.299694 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:40Z","lastTransitionTime":"2025-12-12T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.356185 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.356357 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:40 crc kubenswrapper[4693]: E1212 15:47:40.356440 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.356455 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:40 crc kubenswrapper[4693]: E1212 15:47:40.356550 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:40 crc kubenswrapper[4693]: E1212 15:47:40.356641 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.402723 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.402770 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.402789 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.402809 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.402823 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:40Z","lastTransitionTime":"2025-12-12T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.505215 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.505532 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.505541 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.505555 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.505564 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:40Z","lastTransitionTime":"2025-12-12T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.608167 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.608217 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.608232 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.608253 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.608290 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:40Z","lastTransitionTime":"2025-12-12T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.710956 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.710994 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.711002 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.711016 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.711024 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:40Z","lastTransitionTime":"2025-12-12T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.813729 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.813775 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.813786 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.813806 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.813819 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:40Z","lastTransitionTime":"2025-12-12T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.916174 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.916220 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.916228 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.916244 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:40 crc kubenswrapper[4693]: I1212 15:47:40.916254 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:40Z","lastTransitionTime":"2025-12-12T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.018989 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.019032 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.019043 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.019059 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.019069 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:41Z","lastTransitionTime":"2025-12-12T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.122652 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.122728 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.122741 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.122758 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.122771 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:41Z","lastTransitionTime":"2025-12-12T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.225310 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.225398 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.225416 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.225436 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.225482 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:41Z","lastTransitionTime":"2025-12-12T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.327754 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.327819 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.327830 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.327864 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.327877 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:41Z","lastTransitionTime":"2025-12-12T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.357082 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:41 crc kubenswrapper[4693]: E1212 15:47:41.357361 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.358553 4693 scope.go:117] "RemoveContainer" containerID="b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.429926 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.429959 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.429967 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.429980 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.429990 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:41Z","lastTransitionTime":"2025-12-12T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.531934 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.532417 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.532429 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.532479 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.532497 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:41Z","lastTransitionTime":"2025-12-12T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.634399 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.634452 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.634468 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.634489 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.634506 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:41Z","lastTransitionTime":"2025-12-12T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.738371 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.738416 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.738438 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.738454 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.738466 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:41Z","lastTransitionTime":"2025-12-12T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.840638 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.840676 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.840687 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.840703 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.840728 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:41Z","lastTransitionTime":"2025-12-12T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.866637 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovnkube-controller/2.log" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.869106 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerStarted","Data":"15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147"} Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.869784 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.880710 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 
15:47:41.893414 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.904110 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e4586ed-cc1b-4024-a4a7-aa0431052bad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23217ef6881b3e63efba7e3f80279f3a3a967f82adaaaee3ce1235a1164e2f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\
"cri-o://138206b8b174ebead583b6953999e7e3f8699191291ba8635a106d8ed56efbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://138206b8b174ebead583b6953999e7e3f8699191291ba8635a106d8ed56efbb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.914689 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.926415 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.938813 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.942556 4693 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.942596 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.942610 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.942627 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.942638 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:41Z","lastTransitionTime":"2025-12-12T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.953037 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dcd0e248c19f95611ffa8d0a665c032dff039d82f9b088c437e486136574fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:23Z\\\",\\\"message\\\":\\\"2025-12-12T15:46:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb\\\\n2025-12-12T15:46:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb to /host/opt/cni/bin/\\\\n2025-12-12T15:46:37Z [verbose] multus-daemon started\\\\n2025-12-12T15:46:37Z [verbose] Readiness Indicator file check\\\\n2025-12-12T15:47:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.971753 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1212 15:47:11.359001 6402 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1212 15:47:11.359022 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:47:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.983957 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46c78f0b-1016-48e4-b183-e70a6e692146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc30784ce0860622be7856d80caddb1a7f8c510518a0d7dc647eba7bb3671c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36b5280d53c4c3a10ab04273c8f2c02d7118b49f7bcf33eaada7891585e396d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:41Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:41 crc kubenswrapper[4693]: I1212 15:47:41.999281 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:41Z is after 
2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.010685 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.020117 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.031794 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.042398 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 
15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.044931 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.044973 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.044984 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.045000 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.045011 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:42Z","lastTransitionTime":"2025-12-12T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.055167 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.067070 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.089996 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08221ae4-3d15-4ff7-825f-cc2ce2b72537\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcec8f0c1c45bdf87fbd59304e0059ebc71ad896e88f3033611e2179259226e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e63cb5d27ac7c233ff4d15cd75532081dd0a4da7c8cb027bf2d500952e0711e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71bf63aa4388b0b929872aed61fe7eb400fa636b9e479395331e3ed433b2ad79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e74fc49bf4c47ad5e84f055d0a28da0a1a77c4
aead41edab8df49991ff250fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b8a94d6e3115a3afb2daec3d094b3b600e283e93c7f601999eebc5c5543db39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b1b97779c4ee45d0ecc02bd4fddf2ca83c945878e2dff9464b4141686b35fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b1b97779c4ee45d0ecc02bd4fddf2ca83c945878e2dff9464b4141686b35fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f250f611d130fc50b8e55150a897a3883f81556a7ba929f6dadb35c352dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://765f250f611d130fc50b8e55150a897a3883f81556a7ba929f6dadb35c352dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de183a390733f9a095b9f0ddb181c9e04a8092d555b74ffc3b3d91b48b3c3b10\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de183a390733f9a095b9f0ddb181c9e04a8092d555b74ffc3b3d91b48b3c3b10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.101997 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4
832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.116198 4693 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.146728 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.146769 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 
15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.146780 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.146797 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.146811 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:42Z","lastTransitionTime":"2025-12-12T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.249652 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.249691 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.249702 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.249719 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.249730 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:42Z","lastTransitionTime":"2025-12-12T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.352230 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.352297 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.352309 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.352328 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.352340 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:42Z","lastTransitionTime":"2025-12-12T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.356615 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.356702 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.356874 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:42 crc kubenswrapper[4693]: E1212 15:47:42.357002 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:42 crc kubenswrapper[4693]: E1212 15:47:42.357056 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:42 crc kubenswrapper[4693]: E1212 15:47:42.357119 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.455546 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.455633 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.455660 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.455687 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.455708 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:42Z","lastTransitionTime":"2025-12-12T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.558620 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.558668 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.558678 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.558694 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.558707 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:42Z","lastTransitionTime":"2025-12-12T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.661322 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.661398 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.661694 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.661740 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.661753 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:42Z","lastTransitionTime":"2025-12-12T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.744247 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.744321 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.744336 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.744354 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.744481 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:42Z","lastTransitionTime":"2025-12-12T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:42 crc kubenswrapper[4693]: E1212 15:47:42.762859 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.767578 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.767630 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.767640 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.767656 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.767667 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:42Z","lastTransitionTime":"2025-12-12T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:42 crc kubenswrapper[4693]: E1212 15:47:42.785164 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.788549 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.788589 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.788606 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.788626 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.788640 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:42Z","lastTransitionTime":"2025-12-12T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:42 crc kubenswrapper[4693]: E1212 15:47:42.799958 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.803630 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.803672 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.803688 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.803707 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.803722 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:42Z","lastTransitionTime":"2025-12-12T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:42 crc kubenswrapper[4693]: E1212 15:47:42.819396 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.822856 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.822891 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.822902 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.822915 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.822923 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:42Z","lastTransitionTime":"2025-12-12T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:42 crc kubenswrapper[4693]: E1212 15:47:42.838419 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: E1212 15:47:42.838576 4693 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.840575 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.840634 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.840655 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.840688 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.840714 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:42Z","lastTransitionTime":"2025-12-12T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.874779 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovnkube-controller/3.log" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.875463 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovnkube-controller/2.log" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.878929 4693 generic.go:334] "Generic (PLEG): container finished" podID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerID="15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147" exitCode=1 Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.878975 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerDied","Data":"15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147"} Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.879011 4693 scope.go:117] "RemoveContainer" containerID="b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.879919 4693 scope.go:117] "RemoveContainer" containerID="15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147" Dec 12 15:47:42 crc kubenswrapper[4693]: E1212 15:47:42.880237 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.892881 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.912008 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.922896 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 
15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.937340 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46c78f0b-1016-48e4-b183-e70a6e692146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc30784ce0860622be7856d80caddb1a7f8c510518a0d7dc647eba7bb3671c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36b5280d53c4c3a10ab04273c8f2c02d7118b49f7bcf33eaada7891585e396d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.942351 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.942381 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.942392 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.942408 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.942420 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:42Z","lastTransitionTime":"2025-12-12T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.951909 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.968588 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.981978 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:42 crc kubenswrapper[4693]: I1212 15:47:42.992467 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:42Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.009773 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08221ae4-3d15-4ff7-825f-cc2ce2b72537\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcec8f0c1c45bdf87fbd59304e0059ebc71ad896e88f3033611e2179259226e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e63cb5d27ac7c233ff4d15cd75532081dd0a4da7c8cb027bf2d500952e0711e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71bf63aa4388b0b929872aed61fe7eb400fa636b9e479395331e3ed433b2ad79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e74fc49bf4c47ad5e84f055d0a28da0a1a77c4
aead41edab8df49991ff250fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b8a94d6e3115a3afb2daec3d094b3b600e283e93c7f601999eebc5c5543db39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b1b97779c4ee45d0ecc02bd4fddf2ca83c945878e2dff9464b4141686b35fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b1b97779c4ee45d0ecc02bd4fddf2ca83c945878e2dff9464b4141686b35fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f250f611d130fc50b8e55150a897a3883f81556a7ba929f6dadb35c352dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://765f250f611d130fc50b8e55150a897a3883f81556a7ba929f6dadb35c352dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de183a390733f9a095b9f0ddb181c9e04a8092d555b74ffc3b3d91b48b3c3b10\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de183a390733f9a095b9f0ddb181c9e04a8092d555b74ffc3b3d91b48b3c3b10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.022340 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4
832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.032929 4693 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.043644 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.044731 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.044844 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.044911 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.044971 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.045033 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:43Z","lastTransitionTime":"2025-12-12T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.053619 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.065286 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dcd0e248c19f95611ffa8d0a665c032dff039d82f9b088c437e486136574fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:23Z\\\",\\\"message\\\":\\\"2025-12-12T15:46:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb\\\\n2025-12-12T15:46:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb to /host/opt/cni/bin/\\\\n2025-12-12T15:46:37Z [verbose] multus-daemon started\\\\n2025-12-12T15:46:37Z [verbose] Readiness Indicator file check\\\\n2025-12-12T15:47:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.081909 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1212 15:47:11.359001 6402 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1212 15:47:11.359022 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:47:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"s.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1212 15:47:42.192885 6881 services_controller.go:360] Finished syncing service network-metrics-service on namespace openshift-multus for network=default : 20.271µs\\\\nI1212 15:47:42.192902 6881 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/olm-operator-metrics for network=default\\\\nI1212 15:47:42.192788 6881 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1212 15:47:42.192926 6881 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\
"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.091209 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.101147 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.110534 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e4586ed-cc1b-4024-a4a7-aa0431052bad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23217ef6881b3e63efba7e3f80279f3a3a967f82adaaaee3ce1235a1164e2f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138206b8b174ebead583b6953999e7e3f8699191291ba8635a106d8ed56efbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://138206b8b174ebead583b6953999e7e3f8699191291ba8635a106d8ed56efbb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.121135 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 
crc kubenswrapper[4693]: I1212 15:47:43.148213 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.148261 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.148304 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.148322 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.148331 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:43Z","lastTransitionTime":"2025-12-12T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.250777 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.251018 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.251099 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.251176 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.251254 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:43Z","lastTransitionTime":"2025-12-12T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.353446 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.353517 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.353540 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.353615 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.353711 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:43Z","lastTransitionTime":"2025-12-12T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.356023 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:43 crc kubenswrapper[4693]: E1212 15:47:43.356132 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.368971 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46c78f0b-1016-48e4-b183-e70a6e692146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc30784ce0860622be7856d80caddb1a7f8c510518a0d7dc647eba7bb3671c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36b5280d53c4c3a10ab04273c8f2c02d7118b49f7bcf33eaada7891585e396d\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.389727 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.404804 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.415801 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.431462 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 
2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.445812 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.455680 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.455855 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.455947 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.456031 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.456121 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:43Z","lastTransitionTime":"2025-12-12T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.461035 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.474429 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.507948 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08221ae4-3d15-4ff7-825f-cc2ce2b72537\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcec8f0c1c45bdf87fbd59304e0059ebc71ad896e88f3033611e2179259226e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e63cb5d27ac7c233ff4d15cd75532081dd0a4da7c8cb027bf2d500952e0711e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71bf63aa4388b0b929872aed61fe7eb400fa636b9e479395331e3ed433b2ad79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e74fc49bf4c47ad5e84f055d0a28da0a1a77c4
aead41edab8df49991ff250fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b8a94d6e3115a3afb2daec3d094b3b600e283e93c7f601999eebc5c5543db39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b1b97779c4ee45d0ecc02bd4fddf2ca83c945878e2dff9464b4141686b35fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b1b97779c4ee45d0ecc02bd4fddf2ca83c945878e2dff9464b4141686b35fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f250f611d130fc50b8e55150a897a3883f81556a7ba929f6dadb35c352dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://765f250f611d130fc50b8e55150a897a3883f81556a7ba929f6dadb35c352dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de183a390733f9a095b9f0ddb181c9e04a8092d555b74ffc3b3d91b48b3c3b10\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de183a390733f9a095b9f0ddb181c9e04a8092d555b74ffc3b3d91b48b3c3b10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.524519 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4
832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.538051 4693 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.548301 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.558415 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.558465 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.558476 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.558494 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.558506 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:43Z","lastTransitionTime":"2025-12-12T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.560664 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.573353 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e4586ed-cc1b-4024-a4a7-aa0431052bad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23217ef6881b3e63efba7e3f80279f3a3a967f82adaaaee3ce1235a1164e2f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138206b8b174ebead583b6953999e7e3f8699191291ba8635a106d8ed56efbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://138206b8b174ebead583b6953999e7e3f8699191291ba8635a106d8ed56efbb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.585226 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.602915 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.618096 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.635557 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dcd0e248c19f95611ffa8d0a665c032dff039d82f9b088c437e486136574fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:23Z\\\",\\\"message\\\":\\\"2025-12-12T15:46:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb\\\\n2025-12-12T15:46:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb to /host/opt/cni/bin/\\\\n2025-12-12T15:46:37Z [verbose] multus-daemon started\\\\n2025-12-12T15:46:37Z [verbose] Readiness Indicator file check\\\\n2025-12-12T15:47:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.657202 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f4f1f4da067a7cd40de3a0a8a34b76771c2e101411824bc3157e46ba7e8953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:11Z\\\",\\\"message\\\":\\\"Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1212 15:47:11.359001 6402 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1212 15:47:11.359022 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:47:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"s.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1212 15:47:42.192885 6881 services_controller.go:360] Finished syncing service network-metrics-service on namespace openshift-multus for network=default : 20.271µs\\\\nI1212 15:47:42.192902 6881 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/olm-operator-metrics for network=default\\\\nI1212 15:47:42.192788 6881 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1212 15:47:42.192926 6881 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\
"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.661321 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.661390 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.661406 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.661426 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.661502 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:43Z","lastTransitionTime":"2025-12-12T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.764311 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.764342 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.764351 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.764365 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.764377 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:43Z","lastTransitionTime":"2025-12-12T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.867310 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.867347 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.867355 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.867369 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.867378 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:43Z","lastTransitionTime":"2025-12-12T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.883906 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovnkube-controller/3.log" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.888446 4693 scope.go:117] "RemoveContainer" containerID="15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147" Dec 12 15:47:43 crc kubenswrapper[4693]: E1212 15:47:43.888603 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.907390 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.920339 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.940522 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08221ae4-3d15-4ff7-825f-cc2ce2b72537\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcec8f0c1c45bdf87fbd59304e0059ebc71ad896e88f3033611e2179259226e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e63cb5d27ac7c233ff4d15cd75532081dd0a4da7c8cb027bf2d500952e0711e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71bf63aa4388b0b929872aed61fe7eb400fa636b9e479395331e3ed433b2ad79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e74fc49bf4c47ad5e84f055d0a28da0a1a77c4
aead41edab8df49991ff250fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b8a94d6e3115a3afb2daec3d094b3b600e283e93c7f601999eebc5c5543db39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b1b97779c4ee45d0ecc02bd4fddf2ca83c945878e2dff9464b4141686b35fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b1b97779c4ee45d0ecc02bd4fddf2ca83c945878e2dff9464b4141686b35fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f250f611d130fc50b8e55150a897a3883f81556a7ba929f6dadb35c352dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://765f250f611d130fc50b8e55150a897a3883f81556a7ba929f6dadb35c352dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de183a390733f9a095b9f0ddb181c9e04a8092d555b74ffc3b3d91b48b3c3b10\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de183a390733f9a095b9f0ddb181c9e04a8092d555b74ffc3b3d91b48b3c3b10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.954372 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4
832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.965796 4693 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.969074 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.969095 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 
15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.969105 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.969118 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.969127 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:43Z","lastTransitionTime":"2025-12-12T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:43 crc kubenswrapper[4693]: I1212 15:47:43.978465 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dcd0e248c19f95611ffa8d0a665c032dff039d82f9b088c437e486136574fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:23Z\\\",\\\"message\\\":\\\"2025-12-12T15:46:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb\\\\n2025-12-12T15:46:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb to /host/opt/cni/bin/\\\\n2025-12-12T15:46:37Z [verbose] multus-daemon started\\\\n2025-12-12T15:46:37Z [verbose] Readiness Indicator file check\\\\n2025-12-12T15:47:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.000176 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"s.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1212 15:47:42.192885 6881 services_controller.go:360] Finished syncing service network-metrics-service on namespace openshift-multus for network=default : 20.271µs\\\\nI1212 15:47:42.192902 6881 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/olm-operator-metrics for network=default\\\\nI1212 15:47:42.192788 6881 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1212 15:47:42.192926 6881 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:47:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:43Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.011374 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.024385 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.038472 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e4586ed-cc1b-4024-a4a7-aa0431052bad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23217ef6881b3e63efba7e3f80279f3a3a967f82adaaaee3ce1235a1164e2f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138206b8b174ebead583b6953999e7e3f8699191291ba8635a106d8ed56efbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://138206b8b174ebead583b6953999e7e3f8699191291ba8635a106d8ed56efbb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.054711 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.068871 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.071607 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.071637 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.071647 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.071662 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.071674 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:44Z","lastTransitionTime":"2025-12-12T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.078851 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.089810 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:
48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.100406 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46c78f0b-1016-48e4-b183-e70a6e692146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc30784ce0860622be7856d80caddb1a7f8c510518a0d7dc647eba7bb3671c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36b5280d53c4c3a10ab04273c8f2c02d7118b49f7bcf33eaada7891585e396d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-sche
duler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.111616 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.122079 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.132849 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:44Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.146745 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:44Z is after 
2025-08-24T17:21:41Z" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.173230 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.173291 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.173304 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.173319 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.173331 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:44Z","lastTransitionTime":"2025-12-12T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.275098 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.275138 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.275148 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.275162 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.275172 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:44Z","lastTransitionTime":"2025-12-12T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.356612 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.356744 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:44 crc kubenswrapper[4693]: E1212 15:47:44.356759 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.356834 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:44 crc kubenswrapper[4693]: E1212 15:47:44.356957 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:44 crc kubenswrapper[4693]: E1212 15:47:44.357068 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.377920 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.377938 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.377946 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.377968 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.377976 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:44Z","lastTransitionTime":"2025-12-12T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.480623 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.480671 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.480689 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.480712 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.480728 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:44Z","lastTransitionTime":"2025-12-12T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.584257 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.584378 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.584403 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.584466 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.584488 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:44Z","lastTransitionTime":"2025-12-12T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.687252 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.687310 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.687320 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.687334 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.687343 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:44Z","lastTransitionTime":"2025-12-12T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.790111 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.790166 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.790179 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.790197 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.790210 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:44Z","lastTransitionTime":"2025-12-12T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.892719 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.892769 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.892780 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.892796 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.892808 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:44Z","lastTransitionTime":"2025-12-12T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.995572 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.995628 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.995644 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.995668 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:44 crc kubenswrapper[4693]: I1212 15:47:44.995687 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:44Z","lastTransitionTime":"2025-12-12T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.098798 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.098889 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.098914 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.098965 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.098990 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:45Z","lastTransitionTime":"2025-12-12T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.202174 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.202225 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.202242 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.202266 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.202316 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:45Z","lastTransitionTime":"2025-12-12T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.305635 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.305666 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.305674 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.305686 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.305696 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:45Z","lastTransitionTime":"2025-12-12T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.356583 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:45 crc kubenswrapper[4693]: E1212 15:47:45.356816 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.408003 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.408052 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.408063 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.408082 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.408097 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:45Z","lastTransitionTime":"2025-12-12T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.510501 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.510543 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.510555 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.510572 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.510584 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:45Z","lastTransitionTime":"2025-12-12T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.613528 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.613656 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.613682 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.613713 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.613735 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:45Z","lastTransitionTime":"2025-12-12T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.716142 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.716193 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.716206 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.716220 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.716230 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:45Z","lastTransitionTime":"2025-12-12T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.818712 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.818762 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.818778 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.818798 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.818812 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:45Z","lastTransitionTime":"2025-12-12T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.921393 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.921458 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.921481 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.921509 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:45 crc kubenswrapper[4693]: I1212 15:47:45.921530 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:45Z","lastTransitionTime":"2025-12-12T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.024840 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.024916 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.024959 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.024994 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.025017 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:46Z","lastTransitionTime":"2025-12-12T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.128454 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.128522 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.128544 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.128572 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.128596 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:46Z","lastTransitionTime":"2025-12-12T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.231843 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.231902 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.231925 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.231957 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.231979 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:46Z","lastTransitionTime":"2025-12-12T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.335949 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.336037 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.336052 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.336072 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.336086 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:46Z","lastTransitionTime":"2025-12-12T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.356438 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.356499 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:46 crc kubenswrapper[4693]: E1212 15:47:46.356622 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.356674 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:46 crc kubenswrapper[4693]: E1212 15:47:46.357353 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:46 crc kubenswrapper[4693]: E1212 15:47:46.357512 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.439287 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.439337 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.439347 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.439364 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.439376 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:46Z","lastTransitionTime":"2025-12-12T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.541943 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.542008 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.542027 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.542052 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.542070 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:46Z","lastTransitionTime":"2025-12-12T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.644995 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.645053 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.645076 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.645097 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.645112 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:46Z","lastTransitionTime":"2025-12-12T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.747894 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.747942 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.747953 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.747968 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.747978 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:46Z","lastTransitionTime":"2025-12-12T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.851395 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.851477 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.851501 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.851533 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.851558 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:46Z","lastTransitionTime":"2025-12-12T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.954199 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.954262 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.954297 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.954319 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:46 crc kubenswrapper[4693]: I1212 15:47:46.954351 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:46Z","lastTransitionTime":"2025-12-12T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.057679 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.057732 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.057743 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.057761 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.057773 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:47Z","lastTransitionTime":"2025-12-12T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.160835 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.160880 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.160891 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.160909 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.160920 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:47Z","lastTransitionTime":"2025-12-12T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.263366 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.263419 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.263429 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.263446 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.263459 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:47Z","lastTransitionTime":"2025-12-12T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.356747 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:47 crc kubenswrapper[4693]: E1212 15:47:47.356889 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.365752 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.365808 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.365831 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.365868 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.365890 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:47Z","lastTransitionTime":"2025-12-12T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.468940 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.469000 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.469014 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.469035 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.469050 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:47Z","lastTransitionTime":"2025-12-12T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.572162 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.572213 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.572226 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.572245 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.572258 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:47Z","lastTransitionTime":"2025-12-12T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.675187 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.675238 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.675252 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.675294 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.675307 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:47Z","lastTransitionTime":"2025-12-12T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.778752 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.778828 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.778849 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.778875 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.778892 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:47Z","lastTransitionTime":"2025-12-12T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.882551 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.882634 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.882664 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.882692 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.882715 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:47Z","lastTransitionTime":"2025-12-12T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.985393 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.985465 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.985485 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.985508 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:47 crc kubenswrapper[4693]: I1212 15:47:47.985526 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:47Z","lastTransitionTime":"2025-12-12T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.088322 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.088451 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.088476 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.088510 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.088532 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:48Z","lastTransitionTime":"2025-12-12T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.191438 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.191494 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.191511 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.191533 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.191549 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:48Z","lastTransitionTime":"2025-12-12T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.296621 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.296696 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.296728 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.296757 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.296780 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:48Z","lastTransitionTime":"2025-12-12T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.356441 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.356489 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.356542 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:48 crc kubenswrapper[4693]: E1212 15:47:48.356569 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:48 crc kubenswrapper[4693]: E1212 15:47:48.356748 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:48 crc kubenswrapper[4693]: E1212 15:47:48.356787 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.399127 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.399158 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.399166 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.399178 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.399187 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:48Z","lastTransitionTime":"2025-12-12T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.501712 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.501784 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.501807 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.501836 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.501861 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:48Z","lastTransitionTime":"2025-12-12T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.604515 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.604576 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.604597 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.604623 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.604643 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:48Z","lastTransitionTime":"2025-12-12T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.707883 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.707936 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.707952 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.707971 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.707987 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:48Z","lastTransitionTime":"2025-12-12T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.810429 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.810476 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.810487 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.810504 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.810517 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:48Z","lastTransitionTime":"2025-12-12T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.913219 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.913313 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.913326 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.913341 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:48 crc kubenswrapper[4693]: I1212 15:47:48.913351 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:48Z","lastTransitionTime":"2025-12-12T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.016988 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.017048 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.017068 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.017092 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.017109 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:49Z","lastTransitionTime":"2025-12-12T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.119087 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.119138 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.119182 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.119202 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.119214 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:49Z","lastTransitionTime":"2025-12-12T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.221895 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.221929 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.222065 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.222081 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.222089 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:49Z","lastTransitionTime":"2025-12-12T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.325122 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.325165 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.325176 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.325190 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.325199 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:49Z","lastTransitionTime":"2025-12-12T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.357008 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:49 crc kubenswrapper[4693]: E1212 15:47:49.357190 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.427675 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.427727 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.427739 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.427756 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.427767 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:49Z","lastTransitionTime":"2025-12-12T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.529937 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.530021 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.530041 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.530061 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.530075 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:49Z","lastTransitionTime":"2025-12-12T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.631817 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.631854 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.631865 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.631879 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.631889 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:49Z","lastTransitionTime":"2025-12-12T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.734057 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.734092 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.734102 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.734117 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.734128 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:49Z","lastTransitionTime":"2025-12-12T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.836840 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.836910 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.836933 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.836963 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.836985 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:49Z","lastTransitionTime":"2025-12-12T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.939955 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.940015 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.940032 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.940055 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:49 crc kubenswrapper[4693]: I1212 15:47:49.940073 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:49Z","lastTransitionTime":"2025-12-12T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.043144 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.043229 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.043254 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.043318 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.043357 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:50Z","lastTransitionTime":"2025-12-12T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.145874 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.145929 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.145941 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.145959 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.145972 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:50Z","lastTransitionTime":"2025-12-12T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.247888 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.247945 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.247959 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.247981 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.247995 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:50Z","lastTransitionTime":"2025-12-12T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.350359 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.350436 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.350489 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.350529 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.350555 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:50Z","lastTransitionTime":"2025-12-12T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.356774 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.356795 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.356844 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:50 crc kubenswrapper[4693]: E1212 15:47:50.357095 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:50 crc kubenswrapper[4693]: E1212 15:47:50.357349 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:50 crc kubenswrapper[4693]: E1212 15:47:50.357258 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.453066 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.453114 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.453126 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.453143 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.453155 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:50Z","lastTransitionTime":"2025-12-12T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.557531 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.557573 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.557583 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.557600 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.557611 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:50Z","lastTransitionTime":"2025-12-12T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.660082 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.660167 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.660181 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.660198 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.660208 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:50Z","lastTransitionTime":"2025-12-12T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.762525 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.762573 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.762585 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.762602 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.762615 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:50Z","lastTransitionTime":"2025-12-12T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.864715 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.864757 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.864766 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.864780 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.864793 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:50Z","lastTransitionTime":"2025-12-12T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.967738 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.967826 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.967841 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.967862 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:50 crc kubenswrapper[4693]: I1212 15:47:50.967879 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:50Z","lastTransitionTime":"2025-12-12T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.070699 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.070750 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.070787 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.070810 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.070828 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:51Z","lastTransitionTime":"2025-12-12T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.174056 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.174121 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.174137 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.174155 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.174190 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:51Z","lastTransitionTime":"2025-12-12T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.276964 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.277038 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.277056 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.277085 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.277105 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:51Z","lastTransitionTime":"2025-12-12T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.358830 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:51 crc kubenswrapper[4693]: E1212 15:47:51.359226 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.379427 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.379469 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.379478 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.379493 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.379505 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:51Z","lastTransitionTime":"2025-12-12T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.482061 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.482105 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.482114 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.482149 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.482159 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:51Z","lastTransitionTime":"2025-12-12T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.585126 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.585195 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.585222 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.585252 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.585313 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:51Z","lastTransitionTime":"2025-12-12T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.688539 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.688802 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.688869 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.688945 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.689017 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:51Z","lastTransitionTime":"2025-12-12T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.792311 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.792371 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.792384 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.792406 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.792423 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:51Z","lastTransitionTime":"2025-12-12T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.896675 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.897087 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.897181 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.897292 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:51 crc kubenswrapper[4693]: I1212 15:47:51.897413 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:51Z","lastTransitionTime":"2025-12-12T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.000414 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.000459 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.000472 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.000489 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.000500 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:52Z","lastTransitionTime":"2025-12-12T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.102560 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.102601 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.102612 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.102627 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.102637 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:52Z","lastTransitionTime":"2025-12-12T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.205142 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.205191 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.205203 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.205220 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.205232 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:52Z","lastTransitionTime":"2025-12-12T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.307861 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.307918 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.307937 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.307963 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.307979 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:52Z","lastTransitionTime":"2025-12-12T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.356994 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.357106 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.357071 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6"
Dec 12 15:47:52 crc kubenswrapper[4693]: E1212 15:47:52.357216 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 12 15:47:52 crc kubenswrapper[4693]: E1212 15:47:52.357364 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 12 15:47:52 crc kubenswrapper[4693]: E1212 15:47:52.357531 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.409842 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.409908 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.409932 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.409960 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
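
Every failed pod sync in this stretch reports the same root cause as the node's NotReady condition: the container runtime answers NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration, so kubelet refuses to start sandboxes for pods that need cluster networking. The Go sketch below approximates that gate for diagnosis on the node; it is not kubelet or CRI-O code, the directory path comes from the log message itself, and the extension filter is an assumption based on common libcni conventions.

// cnicheck.go - minimal sketch: report whether a CNI config directory
// contains any network configuration, mirroring the condition behind
// "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig scans dir for files with the extensions libcni
// conventionally loads (.conf, .conflist, .json); an empty result
// corresponds to the runtime reporting NetworkReady=false.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
		os.Exit(1)
	}
	fmt.Println("CNI config present:", ok)
}

Run on the node, this would print "CNI config present: false" until the network plugin writes its configuration, at which point the runtime can flip NetworkReady to true and these pod syncs would be expected to proceed.
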
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.409981 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:52Z","lastTransitionTime":"2025-12-12T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.512235 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.512302 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.512314 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.512330 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.512342 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:52Z","lastTransitionTime":"2025-12-12T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.614760 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.614807 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.614819 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.614840 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.614854 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:52Z","lastTransitionTime":"2025-12-12T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.717209 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.717308 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.717325 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.717341 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.717350 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:52Z","lastTransitionTime":"2025-12-12T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.820046 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.820090 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.820101 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.820118 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.820130 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:52Z","lastTransitionTime":"2025-12-12T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.914679 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.914711 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.914719 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.914732 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.914742 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:52Z","lastTransitionTime":"2025-12-12T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:52 crc kubenswrapper[4693]: E1212 15:47:52.932436 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:52Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.936724 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.936761 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.936771 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.936786 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.936796 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:52Z","lastTransitionTime":"2025-12-12T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:52 crc kubenswrapper[4693]: E1212 15:47:52.952695 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:52Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.957543 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.957603 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
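
The status patch above is rejected before it ever reaches etcd: the apiserver must call the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, and that endpoint's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-12T15:47:52Z, so TLS verification fails and every retry hits the identical x509 error. The Go sketch below reproduces that validity comparison from the node; it is a diagnostic aid only, with the address taken from the log, and is not part of kubelet or the webhook.

// certcheck.go - minimal sketch: fetch the serving certificate of the
// webhook endpoint named in the log and report its validity window
// against the local clock, i.e. the check behind
// "x509: certificate has expired or is not yet valid".
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Skip chain verification so an already-expired certificate
	// can still be inspected.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.Format(time.RFC3339))
	if time.Now().After(leaf.NotAfter) {
		// Matches the log: current time 2025-12-12T15:47:52Z is
		// after 2025-08-24T17:21:41Z.
		fmt.Println("certificate EXPIRED; node status patches will keep failing until it is rotated")
	}
}

Until that certificate is rotated, each node-status patch attempt should fail the same way, which is consistent with the repeated "will retry" records in this log.
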
event="NodeHasNoDiskPressure" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.957615 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.957630 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.957663 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:52Z","lastTransitionTime":"2025-12-12T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:52 crc kubenswrapper[4693]: E1212 15:47:52.991708 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:52Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.999074 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.999135 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.999151 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.999177 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:52 crc kubenswrapper[4693]: I1212 15:47:52.999196 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:52Z","lastTransitionTime":"2025-12-12T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:53 crc kubenswrapper[4693]: E1212 15:47:53.020494 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.025133 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.025173 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
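The body the kubelet is trying to send is a strategic merge patch against the node's status subresource: $setElementOrder/conditions is the strategic-merge-patch directive that pins the order of the merge-keyed conditions list, and each entry in it carries only the merge key, "type". A minimal sketch of that payload shape, using illustrative struct and field names of my own rather than the kubelet's types:

```go
// patchshape.go - sketch of the strategic-merge-patch body seen in the log.
// Type names here are illustrative, not the kubelet's own.
package main

import (
	"encoding/json"
	"fmt"
)

type condition struct {
	Type               string `json:"type"`
	Status             string `json:"status,omitempty"`
	Reason             string `json:"reason,omitempty"`
	Message            string `json:"message,omitempty"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime,omitempty"`
	LastTransitionTime string `json:"lastTransitionTime,omitempty"`
}

func main() {
	// "$setElementOrder/conditions" tells the strategic-merge-patch code how
	// to order the merge-keyed "conditions" list after the patch applies;
	// each entry carries only the merge key ("type").
	patch := map[string]any{
		"status": map[string]any{
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"}, {"type": "DiskPressure"},
				{"type": "PIDPressure"}, {"type": "Ready"},
			},
			"conditions": []condition{{
				Type:               "Ready",
				Status:             "False",
				Reason:             "KubeletNotReady",
				Message:            "container runtime network not ready",
				LastHeartbeatTime:  "2025-12-12T15:47:53Z",
				LastTransitionTime: "2025-12-12T15:47:53Z",
			}},
		},
	}
	body, _ := json.MarshalIndent(patch, "", "  ")
	// Roughly the body PATCHed to the node status subresource.
	fmt.Println(string(body))
}
```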
event="NodeHasNoDiskPressure" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.025185 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.025204 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.025213 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:53Z","lastTransitionTime":"2025-12-12T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:53 crc kubenswrapper[4693]: E1212 15:47:53.038588 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: E1212 15:47:53.038756 4693 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.040490 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
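"update node status exceeds retry count" is the kubelet giving up after a small, fixed number of consecutive patch attempts within one sync loop (the bound is a constant, nodeStatusUpdateRetry, in the kubelet source tree). A minimal sketch of that bounded-retry pattern, with stand-in function names and an assumed bound of 5:

```go
// retry.go - sketch of the bounded retry that ends in
// "update node status exceeds retry count". Names are illustrative.
package main

import (
	"errors"
	"fmt"
)

// The kubelet bounds attempts with a small fixed constant; 5 is assumed here.
const nodeStatusUpdateRetry = 5

func patchNodeStatus() error {
	// Stand-in for the PATCH that the admission webhook rejects above.
	return errors.New("failed calling webhook: tls: failed to verify certificate")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err == nil {
			return nil
		} else {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
		}
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```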
event="NodeHasSufficientMemory" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.040532 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.040541 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.040557 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.040567 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:53Z","lastTransitionTime":"2025-12-12T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.143862 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.143912 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.143922 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.143938 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.143956 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:53Z","lastTransitionTime":"2025-12-12T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.246435 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.246478 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.246488 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.246504 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.246515 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:53Z","lastTransitionTime":"2025-12-12T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.348875 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.348925 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.348937 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.348955 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.348968 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:53Z","lastTransitionTime":"2025-12-12T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.356136 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:53 crc kubenswrapper[4693]: E1212 15:47:53.356260 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.372873 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.372873 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.386375 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.407449 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08221ae4-3d15-4ff7-825f-cc2ce2b72537\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcec8f0c1c45bdf87fbd59304e0059ebc71ad896e88f3033611e2179259226e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e63cb5d27ac7c233ff4d15cd75532081dd0a4da7c8cb027bf2d500952e0711e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71bf63aa4388b0b929872aed61fe7eb400fa636b9e479395331e3ed433b2ad79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e74fc49bf4c47ad5e84f055d0a28da0a1a77c4
aead41edab8df49991ff250fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b8a94d6e3115a3afb2daec3d094b3b600e283e93c7f601999eebc5c5543db39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b1b97779c4ee45d0ecc02bd4fddf2ca83c945878e2dff9464b4141686b35fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b1b97779c4ee45d0ecc02bd4fddf2ca83c945878e2dff9464b4141686b35fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f250f611d130fc50b8e55150a897a3883f81556a7ba929f6dadb35c352dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://765f250f611d130fc50b8e55150a897a3883f81556a7ba929f6dadb35c352dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de183a390733f9a095b9f0ddb181c9e04a8092d555b74ffc3b3d91b48b3c3b10\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de183a390733f9a095b9f0ddb181c9e04a8092d555b74ffc3b3d91b48b3c3b10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.426153 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4
832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.439607 4693 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.452112 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.452156 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 
15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.452165 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.452181 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.452191 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:53Z","lastTransitionTime":"2025-12-12T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.452877 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\
"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.468788 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e4586ed-cc1b-4024-a4a7-aa0431052bad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23217ef6881b3e63efba7e3f80279f3a3a967f82adaaaee3ce1235a1164e2f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138206b8b174ebead583b6953999e7e3f8699191291ba8635a106d8ed56efbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://138206b8b174ebead583b6953999e7e3f8699191291ba8635a106d8ed56efbb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.486968 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.500860 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.516247 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.527937 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dcd0e248c19f95611ffa8d0a665c032dff039d82f9b088c437e486136574fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:23Z\\\",\\\"message\\\":\\\"2025-12-12T15:46:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb\\\\n2025-12-12T15:46:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb to /host/opt/cni/bin/\\\\n2025-12-12T15:46:37Z [verbose] multus-daemon started\\\\n2025-12-12T15:46:37Z [verbose] Readiness Indicator file check\\\\n2025-12-12T15:47:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.550337 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"s.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1212 15:47:42.192885 6881 services_controller.go:360] Finished syncing service network-metrics-service on namespace openshift-multus for network=default : 20.271µs\\\\nI1212 15:47:42.192902 6881 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/olm-operator-metrics for network=default\\\\nI1212 15:47:42.192788 6881 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1212 15:47:42.192926 6881 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:47:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.555155 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.555188 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.555196 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.555212 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.555221 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:53Z","lastTransitionTime":"2025-12-12T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.562576 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.575435 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46c78f0b-1016-48e4-b183-e70a6e692146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc30784ce0860622be7856d80caddb1a7f8c510518a0d7dc647eba7bb3671c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36b5280d53c4c3a10ab04273c8f2c02d7118b49f7bcf33eaada7891585e396d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.590477 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 
2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.607736 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.619785 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.638421 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.650553 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:47:53Z is after 2025-08-24T17:21:41Z" Dec 12 
15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.658151 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.658196 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.658206 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.658220 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.658231 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:53Z","lastTransitionTime":"2025-12-12T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.761461 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.761535 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.761558 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.761579 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.761593 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:53Z","lastTransitionTime":"2025-12-12T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.864353 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.864462 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.864480 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.864501 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.864515 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:53Z","lastTransitionTime":"2025-12-12T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.966769 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.966823 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.966838 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.966864 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:53 crc kubenswrapper[4693]: I1212 15:47:53.966880 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:53Z","lastTransitionTime":"2025-12-12T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.069575 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.069752 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.069776 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.069799 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.069816 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:54Z","lastTransitionTime":"2025-12-12T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.173926 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.173988 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.174001 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.174019 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.174031 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:54Z","lastTransitionTime":"2025-12-12T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.276936 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.276995 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.277010 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.277030 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.277045 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:54Z","lastTransitionTime":"2025-12-12T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.279701 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs\") pod \"network-metrics-daemon-w4zs6\" (UID: \"6ef3804b-c2b3-4645-b60f-9bc977a89f69\") " pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:54 crc kubenswrapper[4693]: E1212 15:47:54.279869 4693 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:47:54 crc kubenswrapper[4693]: E1212 15:47:54.279931 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs podName:6ef3804b-c2b3-4645-b60f-9bc977a89f69 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:58.27991313 +0000 UTC m=+165.448552741 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs") pod "network-metrics-daemon-w4zs6" (UID: "6ef3804b-c2b3-4645-b60f-9bc977a89f69") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.357036 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:54 crc kubenswrapper[4693]: E1212 15:47:54.357259 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.357055 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.357056 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:54 crc kubenswrapper[4693]: E1212 15:47:54.357544 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:54 crc kubenswrapper[4693]: E1212 15:47:54.357638 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.380248 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.380332 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.380346 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.380367 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.380385 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:54Z","lastTransitionTime":"2025-12-12T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.482641 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.482724 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.482746 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.482776 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.482796 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:54Z","lastTransitionTime":"2025-12-12T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.585334 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.585394 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.585421 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.585462 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.585518 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:54Z","lastTransitionTime":"2025-12-12T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.688777 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.688849 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.688866 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.688892 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.688910 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:54Z","lastTransitionTime":"2025-12-12T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.791454 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.791529 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.791543 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.791561 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.791574 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:54Z","lastTransitionTime":"2025-12-12T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.894925 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.894977 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.894991 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.895010 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.895022 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:54Z","lastTransitionTime":"2025-12-12T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.997849 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.997907 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.997922 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.997941 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:54 crc kubenswrapper[4693]: I1212 15:47:54.997957 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:54Z","lastTransitionTime":"2025-12-12T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.100803 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.100878 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.100891 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.100914 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.100927 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:55Z","lastTransitionTime":"2025-12-12T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.203885 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.203926 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.203941 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.203958 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.203968 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:55Z","lastTransitionTime":"2025-12-12T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.307734 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.307793 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.307808 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.307827 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.307842 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:55Z","lastTransitionTime":"2025-12-12T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.356374 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:55 crc kubenswrapper[4693]: E1212 15:47:55.356735 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.357031 4693 scope.go:117] "RemoveContainer" containerID="15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147" Dec 12 15:47:55 crc kubenswrapper[4693]: E1212 15:47:55.357186 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.410657 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.410700 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.410711 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.410726 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.410737 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:55Z","lastTransitionTime":"2025-12-12T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.513309 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.513361 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.513376 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.513394 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.513404 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:55Z","lastTransitionTime":"2025-12-12T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.616454 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.616518 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.616541 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.616568 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.616590 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:55Z","lastTransitionTime":"2025-12-12T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.718743 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.718779 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.718787 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.718801 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.718810 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:55Z","lastTransitionTime":"2025-12-12T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.821426 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.821482 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.821492 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.821507 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.821519 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:55Z","lastTransitionTime":"2025-12-12T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.924042 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.924104 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.924120 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.924142 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:55 crc kubenswrapper[4693]: I1212 15:47:55.924162 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:55Z","lastTransitionTime":"2025-12-12T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.026310 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.026356 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.026371 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.026389 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.026402 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:56Z","lastTransitionTime":"2025-12-12T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.129329 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.129366 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.129378 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.129391 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.129402 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:56Z","lastTransitionTime":"2025-12-12T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.232508 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.232556 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.232567 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.232585 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.232598 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:56Z","lastTransitionTime":"2025-12-12T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.335936 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.335988 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.336001 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.336017 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.336031 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:56Z","lastTransitionTime":"2025-12-12T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.356417 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.356534 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:56 crc kubenswrapper[4693]: E1212 15:47:56.356645 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.356657 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:56 crc kubenswrapper[4693]: E1212 15:47:56.356768 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:56 crc kubenswrapper[4693]: E1212 15:47:56.356953 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.438299 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.438354 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.438371 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.438395 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.438412 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:56Z","lastTransitionTime":"2025-12-12T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.540954 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.541034 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.541057 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.541080 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.541095 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:56Z","lastTransitionTime":"2025-12-12T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.643032 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.643082 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.643095 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.643112 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.643131 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:56Z","lastTransitionTime":"2025-12-12T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.745858 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.745895 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.745904 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.745948 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.745958 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:56Z","lastTransitionTime":"2025-12-12T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.848796 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.848840 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.848850 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.848874 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.848884 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:56Z","lastTransitionTime":"2025-12-12T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.952424 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.952476 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.952487 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.952505 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:56 crc kubenswrapper[4693]: I1212 15:47:56.952516 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:56Z","lastTransitionTime":"2025-12-12T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.056010 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.056078 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.056094 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.056116 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.056129 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:57Z","lastTransitionTime":"2025-12-12T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.159345 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.159407 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.159421 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.159441 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.159461 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:57Z","lastTransitionTime":"2025-12-12T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.275941 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.275981 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.275992 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.276008 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.276019 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:57Z","lastTransitionTime":"2025-12-12T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.357502 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:57 crc kubenswrapper[4693]: E1212 15:47:57.357888 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.378076 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.378121 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.378134 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.378149 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.378161 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:57Z","lastTransitionTime":"2025-12-12T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.480941 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.481008 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.481020 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.481040 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.481053 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:57Z","lastTransitionTime":"2025-12-12T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.584144 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.584202 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.584217 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.584239 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.584254 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:57Z","lastTransitionTime":"2025-12-12T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.687835 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.687912 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.687937 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.687971 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.688003 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:57Z","lastTransitionTime":"2025-12-12T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.790901 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.790971 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.790994 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.791027 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.791052 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:57Z","lastTransitionTime":"2025-12-12T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.893344 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.893396 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.893408 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.893428 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.893442 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:57Z","lastTransitionTime":"2025-12-12T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.995950 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.996025 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.996042 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.996065 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:57 crc kubenswrapper[4693]: I1212 15:47:57.996082 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:57Z","lastTransitionTime":"2025-12-12T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.098671 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.098725 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.098734 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.098747 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.098771 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:58Z","lastTransitionTime":"2025-12-12T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.200781 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.200827 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.200842 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.200859 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.200872 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:58Z","lastTransitionTime":"2025-12-12T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.303535 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.303720 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.303771 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.303792 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.303804 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:58Z","lastTransitionTime":"2025-12-12T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.356458 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.356503 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.356513 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:47:58 crc kubenswrapper[4693]: E1212 15:47:58.356602 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:47:58 crc kubenswrapper[4693]: E1212 15:47:58.356755 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:47:58 crc kubenswrapper[4693]: E1212 15:47:58.356832 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.406239 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.406309 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.406324 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.406343 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.406356 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:58Z","lastTransitionTime":"2025-12-12T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.508526 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.508573 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.508585 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.508603 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.508616 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:58Z","lastTransitionTime":"2025-12-12T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.611212 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.611295 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.611312 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.611336 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.611352 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:58Z","lastTransitionTime":"2025-12-12T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.713841 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.713878 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.713885 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.713901 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.713912 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:58Z","lastTransitionTime":"2025-12-12T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.815846 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.815876 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.815893 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.815911 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.815922 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:58Z","lastTransitionTime":"2025-12-12T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.918053 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.918079 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.918089 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.918101 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:58 crc kubenswrapper[4693]: I1212 15:47:58.918110 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:58Z","lastTransitionTime":"2025-12-12T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.021113 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.021199 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.021253 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.021318 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.021389 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:59Z","lastTransitionTime":"2025-12-12T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.124756 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.124841 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.124869 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.124907 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.124944 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:59Z","lastTransitionTime":"2025-12-12T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.228099 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.228157 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.228170 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.228189 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.228197 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:59Z","lastTransitionTime":"2025-12-12T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.331145 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.331189 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.331198 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.331213 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.331223 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:59Z","lastTransitionTime":"2025-12-12T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.356668 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:47:59 crc kubenswrapper[4693]: E1212 15:47:59.357063 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.433575 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.433620 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.433635 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.433653 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.433665 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:59Z","lastTransitionTime":"2025-12-12T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.538481 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.538553 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.538571 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.538594 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.538613 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:59Z","lastTransitionTime":"2025-12-12T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.641027 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.641098 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.641121 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.641145 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.641171 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:59Z","lastTransitionTime":"2025-12-12T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.743645 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.743692 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.743704 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.743720 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.743732 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:59Z","lastTransitionTime":"2025-12-12T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.846517 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.846584 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.846603 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.846626 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.846642 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:59Z","lastTransitionTime":"2025-12-12T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.949370 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.949418 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.949435 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.949455 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:47:59 crc kubenswrapper[4693]: I1212 15:47:59.949466 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:47:59Z","lastTransitionTime":"2025-12-12T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.051893 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.051942 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.051952 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.051969 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.051981 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:00Z","lastTransitionTime":"2025-12-12T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.154345 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.154401 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.154417 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.154436 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.154453 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:00Z","lastTransitionTime":"2025-12-12T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.256431 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.256480 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.256506 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.256530 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.256546 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:00Z","lastTransitionTime":"2025-12-12T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.356594 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.356732 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.356762 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:00 crc kubenswrapper[4693]: E1212 15:48:00.356917 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:00 crc kubenswrapper[4693]: E1212 15:48:00.357006 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:00 crc kubenswrapper[4693]: E1212 15:48:00.357162 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.359178 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.359262 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.359349 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.359386 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.359404 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:00Z","lastTransitionTime":"2025-12-12T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.462098 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.462141 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.462149 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.462167 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.462179 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:00Z","lastTransitionTime":"2025-12-12T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.564561 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.564599 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.564607 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.564620 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.564641 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:00Z","lastTransitionTime":"2025-12-12T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.667325 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.667378 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.667394 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.667417 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.667436 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:00Z","lastTransitionTime":"2025-12-12T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.770344 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.770403 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.770415 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.770431 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.770441 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:00Z","lastTransitionTime":"2025-12-12T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.873073 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.873106 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.873117 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.873132 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.873142 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:00Z","lastTransitionTime":"2025-12-12T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.975947 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.975984 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.975996 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.976010 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:00 crc kubenswrapper[4693]: I1212 15:48:00.976019 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:00Z","lastTransitionTime":"2025-12-12T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.079182 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.079240 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.079262 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.079345 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.079367 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:01Z","lastTransitionTime":"2025-12-12T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.182008 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.182058 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.182074 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.182099 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.182117 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:01Z","lastTransitionTime":"2025-12-12T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.285335 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.285427 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.285453 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.285486 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.285509 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:01Z","lastTransitionTime":"2025-12-12T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.356190 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:01 crc kubenswrapper[4693]: E1212 15:48:01.356414 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.388526 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.388605 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.388631 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.388662 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.388685 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:01Z","lastTransitionTime":"2025-12-12T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.490658 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.490718 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.490736 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.490758 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.490777 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:01Z","lastTransitionTime":"2025-12-12T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.592733 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.592785 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.592797 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.592813 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.592825 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:01Z","lastTransitionTime":"2025-12-12T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.695669 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.695714 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.695734 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.695756 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.695767 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:01Z","lastTransitionTime":"2025-12-12T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.798174 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.798203 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.798211 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.798223 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.798232 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:01Z","lastTransitionTime":"2025-12-12T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.901514 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.901565 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.901574 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.901591 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:01 crc kubenswrapper[4693]: I1212 15:48:01.901600 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:01Z","lastTransitionTime":"2025-12-12T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.004115 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.004182 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.004204 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.004226 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.004240 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:02Z","lastTransitionTime":"2025-12-12T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.107113 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.107140 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.107147 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.107161 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.107169 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:02Z","lastTransitionTime":"2025-12-12T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.209725 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.209765 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.209775 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.209791 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.209802 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:02Z","lastTransitionTime":"2025-12-12T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.312213 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.312255 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.312288 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.312307 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.312322 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:02Z","lastTransitionTime":"2025-12-12T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.356742 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.356827 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:02 crc kubenswrapper[4693]: E1212 15:48:02.356868 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.356931 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:02 crc kubenswrapper[4693]: E1212 15:48:02.357008 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:02 crc kubenswrapper[4693]: E1212 15:48:02.357072 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.414571 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.414602 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.414610 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.414623 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.414631 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:02Z","lastTransitionTime":"2025-12-12T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.516952 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.517014 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.517022 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.517036 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.517045 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:02Z","lastTransitionTime":"2025-12-12T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.619007 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.619044 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.619054 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.619068 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.619079 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:02Z","lastTransitionTime":"2025-12-12T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.722303 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.722358 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.722369 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.722385 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.722396 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:02Z","lastTransitionTime":"2025-12-12T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.824841 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.824875 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.824883 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.824896 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.824905 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:02Z","lastTransitionTime":"2025-12-12T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.927860 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.927907 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.927918 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.927934 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:02 crc kubenswrapper[4693]: I1212 15:48:02.927947 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:02Z","lastTransitionTime":"2025-12-12T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.030460 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.030506 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.030515 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.030528 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.030536 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:03Z","lastTransitionTime":"2025-12-12T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.133364 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.133440 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.133472 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.133501 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.133521 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:03Z","lastTransitionTime":"2025-12-12T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.236432 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.236501 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.236512 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.236527 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.236539 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:03Z","lastTransitionTime":"2025-12-12T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.339619 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.339687 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.339711 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.339741 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.339839 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:03Z","lastTransitionTime":"2025-12-12T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.356142 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:03 crc kubenswrapper[4693]: E1212 15:48:03.356436 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.386059 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08221ae4-3d15-4ff7-825f-cc2ce2b72537\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcec8f0c1c45bdf87fbd59304e0059ebc71ad896e88f3033611e2179259226e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e63cb5d27ac7c233ff4d15cd75532081dd0a4da7c8cb027bf2d500952e0711e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71bf63aa4388b0b929872aed61fe7eb400fa636b9e479395331e3ed433b2ad79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-12-12T15:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e74fc49bf4c47ad5e84f055d0a28da0a1a77c4aead41edab8df49991ff250fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b8a94d6e3115a3afb2daec3d094b3b600e283e93c7f601999eebc5c5543db39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b1b97779c4ee45d0ecc02bd4fddf2ca83c945878e2dff9464b4141686b35fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b1b97779c4ee45d0ecc02bd4fddf2ca83c945878e2dff9464b4141686b35fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://765f250f611d130fc50b8e55150a897a3883f81556a7ba929f6dadb35c352dc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://765f250f611d130fc50b8e55150a
897a3883f81556a7ba929f6dadb35c352dc0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de183a390733f9a095b9f0ddb181c9e04a8092d555b74ffc3b3d91b48b3c3b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de183a390733f9a095b9f0ddb181c9e04a8092d555b74ffc3b3d91b48b3c3b10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.405036 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1212 15:46:34.561316 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 15:46:34.561531 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 15:46:34.562488 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2797746132/tls.crt::/tmp/serving-cert-2797746132/tls.key\\\\\\\"\\\\nI1212 15:46:35.003439 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 15:46:35.005399 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 15:46:35.005419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 15:46:35.005446 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 15:46:35.005452 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 15:46:35.010123 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 15:46:35.010146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 15:46:35.010156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1212 15:46:35.010155 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1212 15:46:35.010160 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 15:46:35.010165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 15:46:35.010168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 15:46:35.010170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1212 15:46:35.011902 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.406356 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.406389 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.406399 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.406435 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.406446 4693 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:03Z","lastTransitionTime":"2025-12-12T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.418867 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d313f6c39b2eca0fa8c75cb82cc0ad7d561da7a0b76638676eba46233581a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebde3bff1b8b7001982165338c8123e1be92b6bc53b37742ae883a9ee97f8642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: E1212 15:48:03.421688 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.427547 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.427572 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.427580 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.427593 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.427601 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:03Z","lastTransitionTime":"2025-12-12T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.433993 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71d6bb6b-1211-4bbd-8946-2010438d6a5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f82e2d5ecd6dee87e04f991776c6111aaecc3191c68a0e659130ad24a296dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh2lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wvw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: E1212 15:48:03.440259 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.445574 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.445621 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.445633 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.445670 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.445683 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:03Z","lastTransitionTime":"2025-12-12T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.449798 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sllz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54028d7-cdbb-4fa9-92cd-9570edacb888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dcd0e248c19f95611ffa8d0a665c032dff039d82f9b088c437e486136574fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:23Z\\\",\\\"message\\\":\\\"2025-12-12T15:46:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb\\\\n2025-12-12T15:46:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_831e5a88-9ce2-4c06-acff-ffdc61ed87eb to /host/opt/cni/bin/\\\\n2025-12-12T15:46:37Z [verbose] multus-daemon started\\\\n2025-12-12T15:46:37Z [verbose] Readiness Indicator file check\\\\n2025-12-12T15:47:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zk9xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sllz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: E1212 15:48:03.458785 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.463375 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.463423 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.463433 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.463447 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.463457 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:03Z","lastTransitionTime":"2025-12-12T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.473701 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa7eae7d-b662-434d-96c1-de3080d579bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T15:47:42Z\\\",\\\"message\\\":\\\"s.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1212 15:47:42.192885 6881 services_controller.go:360] Finished syncing service network-metrics-service on namespace openshift-multus for network=default : 20.271µs\\\\nI1212 15:47:42.192902 6881 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/olm-operator-metrics for network=default\\\\nI1212 15:47:42.192788 6881 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1212 15:47:42.192926 6881 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T15:47:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=ovnkube-controller pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwpht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ps9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: E1212 15:48:03.478798 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.483575 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.483634 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.483644 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.483680 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.483693 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:03Z","lastTransitionTime":"2025-12-12T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.486769 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpnjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e063858d-709e-46eb-ab3a-c71ffd012b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4edade5e88a0d85b9f04c08b507097880f966003b8bb10546b177ba59d234fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99qql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpnjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: E1212 15:48:03.497807 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T15:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"06cc8039-d4d0-428c-b1fb-d3ae486da4dd\\\",\\\"systemUUID\\\":\\\"7f31af20-0471-4822-ac00-478aed93de06\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: E1212 15:48:03.498025 4693 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.500910 4693 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d48451-cf76-4e73-9c94-fdca0d4b8ac9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e20a4d551c66abdf743446b08102a3d00fca62962c177b235f47f03aee8beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0957e8a25746dcf6488e55396a1b61d2bd7f3b04715a2c62673dace9c23815f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a0bf46bb066d2f6705a422a9c2da684fd43adb187867903a
43858789313304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.502483 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.502556 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.502570 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.502839 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.502864 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:03Z","lastTransitionTime":"2025-12-12T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.513177 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e4586ed-cc1b-4024-a4a7-aa0431052bad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23217ef6881b3e63efba7e3f80279f3a3a967f82adaaaee3ce1235a1164e2f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138206b8b174ebead583b6953999e7e3f8699191291ba8635a106d8ed56efbb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://138206b8b174ebead583b6953999e7e3f8699191291ba8635a106d8ed56efbb5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.530878 4693 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f15f5abb5b2345690d7af5a94c2c6dbef87240bfc68e3cbda3de1d3721aa21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.546316 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.562755 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6efc9d0-9c03-4235-ab59-96263c372e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2e726f8894f9687ebd38057eff29f8d31ee7c551c97580a52cf27bf0d69a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23a7d3167616b467ab74680bfa010784bb234da900db8445dec95ff29cfff2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94fac1f7dd297fedda4929855f51c134d176fe8f4ec7cd0f31828d4bd5c06c70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922a501396fa76f9de9098db670e44623250b56b971c8ef221bab4c2431cfaa3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:38Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccf358e1acbf99c0b6404e7ccd2eed3d80493b45b25e0e665d451b2b0fcb68c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cde82925275a2a974f4c858d7780b9b42d9cd19d5d05c882caf3775e48a44f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66b34906ee8a8ea1e7dbf151873d3e72c38f8e2c7ef88482e16577b90528de2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h672\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gvtgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.613562 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.613601 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.613611 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.613626 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.613635 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:03Z","lastTransitionTime":"2025-12-12T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.615530 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0124f2-8890-495e-919d-da02af9ecd6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6ee772252ca6daf992f916cf2f4fba993106d436c8a192a37b1cf81080c5342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ad52957967efb3497de12a094e81ca9ffc7fc6fb88705e9d16ac22319711e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hsj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bjdt6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.629406 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46c78f0b-1016-48e4-b183-e70a6e692146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc30784ce0860622be7856d80caddb1a7f8c510518a0d7dc647eba7bb3671c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36b5280d53c4c3a10ab04273c8f2c02d7118b49f7bcf33eaada7891585e396d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-12-12T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd1212bf08bcad53d80c8f18baf905aef3b1370861abde1943366246cf0a00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T15:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T15:46:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.642052 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f67d936358d15fef8e1ce849347253b6c8fb63e491d35cc19c4a405902c051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.652695 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.662384 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nth2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20c9fcf7-c537-47fe-9699-bc3d411dd964\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab0ae83342fcaff5f505ac341c7aeb42a02131a603f3a7d8f7499bf36140f915\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nth2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.677137 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.686035 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef3804b-c2b3-4645-b60f-9bc977a89f69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T15:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T15:46:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w4zs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T15:48:03Z is after 2025-08-24T17:21:41Z"
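
Every status-patch failure above shares a single root cause: the serving certificate presented by the pod.network-node-identity.openshift.io webhook on https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-12-12T15:48:03Z, so the kubelet's TLS client rejects every POST to /pod and no pod status can be persisted. A minimal Go sketch for reading the certificate's validity window from the node (the address is taken from the log above; InsecureSkipVerify is set only so the handshake survives long enough to inspect the expired certificate):

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Skip verification only to *inspect* the certificate; the kubelet's
        // client rightly refuses it, which is exactly what the log shows.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("handshake failed: %v", err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
        fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
        fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
    }

On a CRC cluster this pattern usually follows a long suspend: the VM resumes months past the certificates' notAfter, and the errors typically clear once the cluster's own certificate rotation re-issues the short-lived certs.
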
Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.716951 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.717380 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.717399 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.717422 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.717439 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:03Z","lastTransitionTime":"2025-12-12T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.822776 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.822851 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.822864 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.822883 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.822894 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:03Z","lastTransitionTime":"2025-12-12T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.925355 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.925383 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.925391 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.925406 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:03 crc kubenswrapper[4693]: I1212 15:48:03.925413 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:03Z","lastTransitionTime":"2025-12-12T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.027509 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.027554 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.027564 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.027580 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.027589 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:04Z","lastTransitionTime":"2025-12-12T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.130082 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.130116 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.130128 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.130143 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.130155 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:04Z","lastTransitionTime":"2025-12-12T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.232593 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.232666 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.232689 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.232719 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.232744 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:04Z","lastTransitionTime":"2025-12-12T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.335124 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.335186 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.335205 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.335228 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.335246 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:04Z","lastTransitionTime":"2025-12-12T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.356853 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.357008 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:04 crc kubenswrapper[4693]: E1212 15:48:04.357188 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.357232 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:04 crc kubenswrapper[4693]: E1212 15:48:04.357328 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:04 crc kubenswrapper[4693]: E1212 15:48:04.357403 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.438713 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.438815 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.438834 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.438856 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.438877 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:04Z","lastTransitionTime":"2025-12-12T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.541759 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.541814 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.541830 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.541854 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.541871 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:04Z","lastTransitionTime":"2025-12-12T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.644306 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.644383 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.644400 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.644435 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.644452 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:04Z","lastTransitionTime":"2025-12-12T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.746637 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.746688 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.746699 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.746755 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.746771 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:04Z","lastTransitionTime":"2025-12-12T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.849540 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.849610 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.849633 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.849662 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.849685 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:04Z","lastTransitionTime":"2025-12-12T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.952060 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.952111 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.952128 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.952148 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:04 crc kubenswrapper[4693]: I1212 15:48:04.952163 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:04Z","lastTransitionTime":"2025-12-12T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.054621 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.054656 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.054668 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.054684 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.054695 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:05Z","lastTransitionTime":"2025-12-12T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.157167 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.157222 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.157234 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.157250 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.157261 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:05Z","lastTransitionTime":"2025-12-12T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.261054 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.261703 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.261716 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.261921 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.261934 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:05Z","lastTransitionTime":"2025-12-12T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.356516 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:05 crc kubenswrapper[4693]: E1212 15:48:05.356672 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.364252 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.364314 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.364327 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.364342 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.364355 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:05Z","lastTransitionTime":"2025-12-12T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.466746 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.466774 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.466782 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.466794 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.466803 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:05Z","lastTransitionTime":"2025-12-12T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.569589 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.569625 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.569633 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.569647 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.569656 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:05Z","lastTransitionTime":"2025-12-12T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.673488 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.673527 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.673538 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.673555 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.673567 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:05Z","lastTransitionTime":"2025-12-12T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.775588 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.775669 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.775699 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.775728 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.775747 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:05Z","lastTransitionTime":"2025-12-12T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.878201 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.878238 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.878264 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.878290 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.878299 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:05Z","lastTransitionTime":"2025-12-12T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.980883 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.980941 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.980958 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.980981 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:05 crc kubenswrapper[4693]: I1212 15:48:05.980997 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:05Z","lastTransitionTime":"2025-12-12T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.083678 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.083743 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.083761 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.083785 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.083803 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:06Z","lastTransitionTime":"2025-12-12T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.186380 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.186443 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.186462 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.186490 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.186508 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:06Z","lastTransitionTime":"2025-12-12T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.289093 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.289156 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.289180 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.289210 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.289231 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:06Z","lastTransitionTime":"2025-12-12T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.356562 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.356628 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:06 crc kubenswrapper[4693]: E1212 15:48:06.356720 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.356735 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:06 crc kubenswrapper[4693]: E1212 15:48:06.356862 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:06 crc kubenswrapper[4693]: E1212 15:48:06.356903 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.391955 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.391989 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.391997 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.392010 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.392018 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:06Z","lastTransitionTime":"2025-12-12T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.494532 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.494562 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.494569 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.494581 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.494590 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:06Z","lastTransitionTime":"2025-12-12T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.597239 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.597295 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.597308 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.597325 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.597336 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:06Z","lastTransitionTime":"2025-12-12T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.699529 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.699566 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.699577 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.699593 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.699605 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:06Z","lastTransitionTime":"2025-12-12T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.802622 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.802679 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.802696 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.802720 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.802737 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:06Z","lastTransitionTime":"2025-12-12T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.905711 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.905761 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.905777 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.905799 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:06 crc kubenswrapper[4693]: I1212 15:48:06.905816 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:06Z","lastTransitionTime":"2025-12-12T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.008396 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.008442 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.008482 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.008500 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.008512 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:07Z","lastTransitionTime":"2025-12-12T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.111832 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.111880 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.111897 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.111919 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.111935 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:07Z","lastTransitionTime":"2025-12-12T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.214186 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.214216 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.214227 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.214244 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.214255 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:07Z","lastTransitionTime":"2025-12-12T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.317430 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.317469 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.317480 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.317496 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.317509 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:07Z","lastTransitionTime":"2025-12-12T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.356261 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:07 crc kubenswrapper[4693]: E1212 15:48:07.356449 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.357705 4693 scope.go:117] "RemoveContainer" containerID="15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147" Dec 12 15:48:07 crc kubenswrapper[4693]: E1212 15:48:07.358020 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.419672 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.419749 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.419791 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.419823 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.419848 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:07Z","lastTransitionTime":"2025-12-12T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.522778 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.522832 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.522844 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.522862 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.522874 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:07Z","lastTransitionTime":"2025-12-12T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.419672 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.419749 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.419791 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.419823 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.419848 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:07Z","lastTransitionTime":"2025-12-12T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.522778 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.522832 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.522844 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.522862 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.522874 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:07Z","lastTransitionTime":"2025-12-12T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.625567 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.625614 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.625625 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.625640 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.625650 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:07Z","lastTransitionTime":"2025-12-12T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.728475 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.728790 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.728871 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.728998 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.729064 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:07Z","lastTransitionTime":"2025-12-12T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.832116 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.832523 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.832621 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.832733 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.832825 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:07Z","lastTransitionTime":"2025-12-12T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.934691 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.935262 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.935389 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.935463 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:07 crc kubenswrapper[4693]: I1212 15:48:07.935550 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:07Z","lastTransitionTime":"2025-12-12T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.038958 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.038986 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.038993 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.039004 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.039012 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:08Z","lastTransitionTime":"2025-12-12T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.141072 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.141112 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.141120 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.141133 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.141142 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:08Z","lastTransitionTime":"2025-12-12T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.243507 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.243565 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.243582 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.243603 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.243623 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:08Z","lastTransitionTime":"2025-12-12T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.350871 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.350917 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.350930 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.350947 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.350959 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:08Z","lastTransitionTime":"2025-12-12T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.356135 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.356141 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:08 crc kubenswrapper[4693]: E1212 15:48:08.356337 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:08 crc kubenswrapper[4693]: E1212 15:48:08.356459 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.356700 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:08 crc kubenswrapper[4693]: E1212 15:48:08.356841 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.453758 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.453842 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.453865 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.453894 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.453919 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:08Z","lastTransitionTime":"2025-12-12T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.557114 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.557179 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.557198 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.557224 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.557241 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:08Z","lastTransitionTime":"2025-12-12T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.661443 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.661498 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.661509 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.661526 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.661542 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:08Z","lastTransitionTime":"2025-12-12T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.764254 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.764349 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.764366 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.764388 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.764400 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:08Z","lastTransitionTime":"2025-12-12T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.866858 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.866889 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.866897 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.866910 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.866918 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:08Z","lastTransitionTime":"2025-12-12T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.969646 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.969726 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.969749 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.969775 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:08 crc kubenswrapper[4693]: I1212 15:48:08.969796 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:08Z","lastTransitionTime":"2025-12-12T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.073095 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.073153 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.073170 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.073202 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.073224 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:09Z","lastTransitionTime":"2025-12-12T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.176378 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.176450 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.176467 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.176491 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.176508 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:09Z","lastTransitionTime":"2025-12-12T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.279124 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.279514 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.279729 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.280014 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.280254 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:09Z","lastTransitionTime":"2025-12-12T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.356287 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:09 crc kubenswrapper[4693]: E1212 15:48:09.356420 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.383237 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.383298 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.383307 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.383322 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.383332 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:09Z","lastTransitionTime":"2025-12-12T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.486208 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.486259 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.486309 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.486334 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.486348 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:09Z","lastTransitionTime":"2025-12-12T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.588897 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.588942 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.588952 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.588965 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.588976 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:09Z","lastTransitionTime":"2025-12-12T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.692803 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.693100 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.693230 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.693365 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.693480 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:09Z","lastTransitionTime":"2025-12-12T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.796267 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.796390 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.796414 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.796438 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.796468 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:09Z","lastTransitionTime":"2025-12-12T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.900032 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.900398 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.900496 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.900618 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:09 crc kubenswrapper[4693]: I1212 15:48:09.900704 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:09Z","lastTransitionTime":"2025-12-12T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.002436 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.003082 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.003198 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.003308 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.003411 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:10Z","lastTransitionTime":"2025-12-12T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.106651 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.106691 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.106703 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.106720 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.106731 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:10Z","lastTransitionTime":"2025-12-12T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.209602 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.209641 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.209652 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.209666 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.209675 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:10Z","lastTransitionTime":"2025-12-12T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.311345 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.311372 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.311379 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.311391 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.311399 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:10Z","lastTransitionTime":"2025-12-12T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.356436 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.356552 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.356552 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:10 crc kubenswrapper[4693]: E1212 15:48:10.356994 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:10 crc kubenswrapper[4693]: E1212 15:48:10.357044 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:10 crc kubenswrapper[4693]: E1212 15:48:10.357099 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.413646 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.413680 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.413688 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.413701 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.413710 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:10Z","lastTransitionTime":"2025-12-12T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.515658 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.515691 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.515700 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.515712 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.515722 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:10Z","lastTransitionTime":"2025-12-12T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.618871 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.619329 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.619487 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.619650 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.619823 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:10Z","lastTransitionTime":"2025-12-12T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.722454 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.722506 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.722522 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.722543 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.722558 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:10Z","lastTransitionTime":"2025-12-12T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.824799 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.824832 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.824845 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.825094 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.825105 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:10Z","lastTransitionTime":"2025-12-12T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.927300 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.927343 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.927353 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.927371 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.927382 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:10Z","lastTransitionTime":"2025-12-12T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.974064 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sllz5_e54028d7-cdbb-4fa9-92cd-9570edacb888/kube-multus/1.log" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.974468 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sllz5_e54028d7-cdbb-4fa9-92cd-9570edacb888/kube-multus/0.log" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.974514 4693 generic.go:334] "Generic (PLEG): container finished" podID="e54028d7-cdbb-4fa9-92cd-9570edacb888" containerID="3dcd0e248c19f95611ffa8d0a665c032dff039d82f9b088c437e486136574fce" exitCode=1 Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.974547 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sllz5" event={"ID":"e54028d7-cdbb-4fa9-92cd-9570edacb888","Type":"ContainerDied","Data":"3dcd0e248c19f95611ffa8d0a665c032dff039d82f9b088c437e486136574fce"} Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.974582 4693 scope.go:117] "RemoveContainer" containerID="44c4c7f71b73fe92a034fe2c30310997e7e7442da252e82cc10dcad536061fcc" Dec 12 15:48:10 crc kubenswrapper[4693]: I1212 15:48:10.974953 4693 scope.go:117] "RemoveContainer" containerID="3dcd0e248c19f95611ffa8d0a665c032dff039d82f9b088c437e486136574fce" Dec 12 15:48:10 crc kubenswrapper[4693]: E1212 15:48:10.975121 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-sllz5_openshift-multus(e54028d7-cdbb-4fa9-92cd-9570edacb888)\"" pod="openshift-multus/multus-sllz5" podUID="e54028d7-cdbb-4fa9-92cd-9570edacb888" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.012523 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=96.0125095 podStartE2EDuration="1m36.0125095s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:10.998361798 +0000 UTC m=+118.167001439" watchObservedRunningTime="2025-12-12 15:48:11.0125095 +0000 UTC m=+118.181149101" Dec 12 
15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.029173 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.029210 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.029218 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.029232 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.029240 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:11Z","lastTransitionTime":"2025-12-12T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.044755 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=39.044672903 podStartE2EDuration="39.044672903s" podCreationTimestamp="2025-12-12 15:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:11.044081388 +0000 UTC m=+118.212720999" watchObservedRunningTime="2025-12-12 15:48:11.044672903 +0000 UTC m=+118.213312514" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.055228 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=43.055208193 podStartE2EDuration="43.055208193s" podCreationTimestamp="2025-12-12 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:11.054631328 +0000 UTC m=+118.223270939" watchObservedRunningTime="2025-12-12 15:48:11.055208193 +0000 UTC m=+118.223847804" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.092056 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podStartSLOduration=97.091998324 podStartE2EDuration="1m37.091998324s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:11.091026729 +0000 UTC m=+118.259666340" watchObservedRunningTime="2025-12-12 15:48:11.091998324 +0000 UTC m=+118.260637975" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.131530 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.131573 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.131583 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.131597 4693 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.131607 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:11Z","lastTransitionTime":"2025-12-12T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.140849 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-fpnjv" podStartSLOduration=97.140829713 podStartE2EDuration="1m37.140829713s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:11.140398632 +0000 UTC m=+118.309038243" watchObservedRunningTime="2025-12-12 15:48:11.140829713 +0000 UTC m=+118.309469304" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.155425 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=96.155406526 podStartE2EDuration="1m36.155406526s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:11.155199291 +0000 UTC m=+118.323838932" watchObservedRunningTime="2025-12-12 15:48:11.155406526 +0000 UTC m=+118.324046127" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.214692 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-nth2b" podStartSLOduration=97.214669972 podStartE2EDuration="1m37.214669972s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:11.197901113 +0000 UTC m=+118.366540714" watchObservedRunningTime="2025-12-12 15:48:11.214669972 +0000 UTC m=+118.383309583" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.215119 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gvtgv" podStartSLOduration=97.215111954 podStartE2EDuration="1m37.215111954s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:11.214612601 +0000 UTC m=+118.383252202" watchObservedRunningTime="2025-12-12 15:48:11.215111954 +0000 UTC m=+118.383751565" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.227679 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bjdt6" podStartSLOduration=96.227662345 podStartE2EDuration="1m36.227662345s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:11.226345301 +0000 UTC m=+118.394984922" watchObservedRunningTime="2025-12-12 15:48:11.227662345 +0000 UTC m=+118.396301946" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.233839 4693 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.233876 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.233885 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.233898 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.233907 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:11Z","lastTransitionTime":"2025-12-12T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.252103 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=61.252084919 podStartE2EDuration="1m1.252084919s" podCreationTimestamp="2025-12-12 15:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:11.24154023 +0000 UTC m=+118.410179841" watchObservedRunningTime="2025-12-12 15:48:11.252084919 +0000 UTC m=+118.420724520" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.336333 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.336374 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.336386 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.336401 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.336411 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:11Z","lastTransitionTime":"2025-12-12T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.356767 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:11 crc kubenswrapper[4693]: E1212 15:48:11.356949 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.438440 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.438477 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.438485 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.438498 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.438507 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:11Z","lastTransitionTime":"2025-12-12T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.541333 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.541410 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.541428 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.541452 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.541489 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:11Z","lastTransitionTime":"2025-12-12T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.644215 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.644253 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.644263 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.644293 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.644301 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:11Z","lastTransitionTime":"2025-12-12T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.746900 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.746950 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.746972 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.746999 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.747019 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:11Z","lastTransitionTime":"2025-12-12T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.849706 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.849747 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.849757 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.849774 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.849784 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:11Z","lastTransitionTime":"2025-12-12T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.952250 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.952380 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.952401 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.952425 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.952451 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:11Z","lastTransitionTime":"2025-12-12T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:11 crc kubenswrapper[4693]: I1212 15:48:11.980383 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sllz5_e54028d7-cdbb-4fa9-92cd-9570edacb888/kube-multus/1.log" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.054204 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.054253 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.054292 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.054345 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.054368 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:12Z","lastTransitionTime":"2025-12-12T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.157683 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.157730 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.157748 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.157769 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.157787 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:12Z","lastTransitionTime":"2025-12-12T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.281983 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.282051 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.282069 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.282095 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.282113 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:12Z","lastTransitionTime":"2025-12-12T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.356754 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.356841 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:12 crc kubenswrapper[4693]: E1212 15:48:12.356908 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.356967 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:12 crc kubenswrapper[4693]: E1212 15:48:12.357085 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:12 crc kubenswrapper[4693]: E1212 15:48:12.357147 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.384932 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.385036 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.385062 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.385091 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.385106 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:12Z","lastTransitionTime":"2025-12-12T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.487194 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.487256 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.487287 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.487304 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.487314 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:12Z","lastTransitionTime":"2025-12-12T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.590115 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.590162 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.590171 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.590182 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.590190 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:12Z","lastTransitionTime":"2025-12-12T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.693099 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.693167 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.693178 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.693191 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.693200 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:12Z","lastTransitionTime":"2025-12-12T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.795509 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.795597 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.795613 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.795659 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.795671 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:12Z","lastTransitionTime":"2025-12-12T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.898386 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.898745 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.898875 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.898989 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:12 crc kubenswrapper[4693]: I1212 15:48:12.899078 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:12Z","lastTransitionTime":"2025-12-12T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.002929 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.003332 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.003503 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.003623 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.003719 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:13Z","lastTransitionTime":"2025-12-12T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.107122 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.107802 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.107908 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.108022 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.108114 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:13Z","lastTransitionTime":"2025-12-12T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 15:48:13 crc kubenswrapper[4693]: E1212 15:48:13.208845 4693 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.356238 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:13 crc kubenswrapper[4693]: E1212 15:48:13.357576 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:13 crc kubenswrapper[4693]: E1212 15:48:13.494629 4693 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.647583 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.647657 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.647671 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.647687 4693 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.647699 4693 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T15:48:13Z","lastTransitionTime":"2025-12-12T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.699906 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh"] Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.700331 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.702029 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.702409 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.702823 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.702823 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.795337 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9695a267-bc6c-48cc-8581-de3fea8dfcf1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r8fmh\" (UID: \"9695a267-bc6c-48cc-8581-de3fea8dfcf1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.795379 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9695a267-bc6c-48cc-8581-de3fea8dfcf1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r8fmh\" (UID: \"9695a267-bc6c-48cc-8581-de3fea8dfcf1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.795402 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9695a267-bc6c-48cc-8581-de3fea8dfcf1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r8fmh\" (UID: \"9695a267-bc6c-48cc-8581-de3fea8dfcf1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.795594 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9695a267-bc6c-48cc-8581-de3fea8dfcf1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r8fmh\" (UID: \"9695a267-bc6c-48cc-8581-de3fea8dfcf1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.795665 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9695a267-bc6c-48cc-8581-de3fea8dfcf1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r8fmh\" (UID: \"9695a267-bc6c-48cc-8581-de3fea8dfcf1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.897173 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9695a267-bc6c-48cc-8581-de3fea8dfcf1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r8fmh\" (UID: \"9695a267-bc6c-48cc-8581-de3fea8dfcf1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:13 crc 
kubenswrapper[4693]: I1212 15:48:13.897231 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9695a267-bc6c-48cc-8581-de3fea8dfcf1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r8fmh\" (UID: \"9695a267-bc6c-48cc-8581-de3fea8dfcf1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.897258 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9695a267-bc6c-48cc-8581-de3fea8dfcf1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r8fmh\" (UID: \"9695a267-bc6c-48cc-8581-de3fea8dfcf1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.897319 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9695a267-bc6c-48cc-8581-de3fea8dfcf1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r8fmh\" (UID: \"9695a267-bc6c-48cc-8581-de3fea8dfcf1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.897334 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9695a267-bc6c-48cc-8581-de3fea8dfcf1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r8fmh\" (UID: \"9695a267-bc6c-48cc-8581-de3fea8dfcf1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.897593 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9695a267-bc6c-48cc-8581-de3fea8dfcf1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r8fmh\" (UID: \"9695a267-bc6c-48cc-8581-de3fea8dfcf1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.897700 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9695a267-bc6c-48cc-8581-de3fea8dfcf1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r8fmh\" (UID: \"9695a267-bc6c-48cc-8581-de3fea8dfcf1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.898198 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9695a267-bc6c-48cc-8581-de3fea8dfcf1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r8fmh\" (UID: \"9695a267-bc6c-48cc-8581-de3fea8dfcf1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.906323 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9695a267-bc6c-48cc-8581-de3fea8dfcf1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r8fmh\" (UID: \"9695a267-bc6c-48cc-8581-de3fea8dfcf1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:13 crc kubenswrapper[4693]: I1212 15:48:13.924813 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9695a267-bc6c-48cc-8581-de3fea8dfcf1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r8fmh\" (UID: \"9695a267-bc6c-48cc-8581-de3fea8dfcf1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:14 crc kubenswrapper[4693]: I1212 15:48:14.050216 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" Dec 12 15:48:14 crc kubenswrapper[4693]: I1212 15:48:14.356471 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:14 crc kubenswrapper[4693]: E1212 15:48:14.357613 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:14 crc kubenswrapper[4693]: I1212 15:48:14.356579 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:14 crc kubenswrapper[4693]: E1212 15:48:14.357853 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:14 crc kubenswrapper[4693]: I1212 15:48:14.356521 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:14 crc kubenswrapper[4693]: E1212 15:48:14.358069 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:14 crc kubenswrapper[4693]: I1212 15:48:14.991178 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" event={"ID":"9695a267-bc6c-48cc-8581-de3fea8dfcf1","Type":"ContainerStarted","Data":"57bc2589b4568db3459d251b4e205432518c089bd503b6aa55c9a67aafd3a9f1"} Dec 12 15:48:14 crc kubenswrapper[4693]: I1212 15:48:14.991514 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" event={"ID":"9695a267-bc6c-48cc-8581-de3fea8dfcf1","Type":"ContainerStarted","Data":"452d9c58f64749e45a8e70ad8f75fa4a2ad39b15529b6973d879a2f346ca26e5"} Dec 12 15:48:15 crc kubenswrapper[4693]: I1212 15:48:15.005820 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8fmh" podStartSLOduration=101.005791283 podStartE2EDuration="1m41.005791283s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:15.005743721 +0000 UTC m=+122.174383392" watchObservedRunningTime="2025-12-12 15:48:15.005791283 +0000 UTC m=+122.174430924" Dec 12 15:48:15 crc kubenswrapper[4693]: I1212 15:48:15.356547 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:15 crc kubenswrapper[4693]: E1212 15:48:15.356681 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:16 crc kubenswrapper[4693]: I1212 15:48:16.357136 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:16 crc kubenswrapper[4693]: I1212 15:48:16.357204 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:16 crc kubenswrapper[4693]: I1212 15:48:16.357242 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:16 crc kubenswrapper[4693]: E1212 15:48:16.358678 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:16 crc kubenswrapper[4693]: E1212 15:48:16.358851 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:16 crc kubenswrapper[4693]: E1212 15:48:16.359083 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:17 crc kubenswrapper[4693]: I1212 15:48:17.356391 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:17 crc kubenswrapper[4693]: E1212 15:48:17.356545 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:18 crc kubenswrapper[4693]: I1212 15:48:18.356242 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:18 crc kubenswrapper[4693]: I1212 15:48:18.356334 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:18 crc kubenswrapper[4693]: I1212 15:48:18.356344 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:18 crc kubenswrapper[4693]: E1212 15:48:18.356447 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:18 crc kubenswrapper[4693]: E1212 15:48:18.356735 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:18 crc kubenswrapper[4693]: E1212 15:48:18.357340 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:18 crc kubenswrapper[4693]: I1212 15:48:18.357905 4693 scope.go:117] "RemoveContainer" containerID="15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147" Dec 12 15:48:18 crc kubenswrapper[4693]: E1212 15:48:18.358170 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ps9gt_openshift-ovn-kubernetes(fa7eae7d-b662-434d-96c1-de3080d579bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" Dec 12 15:48:18 crc kubenswrapper[4693]: E1212 15:48:18.496121 4693 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 12 15:48:19 crc kubenswrapper[4693]: I1212 15:48:19.356753 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:19 crc kubenswrapper[4693]: E1212 15:48:19.357141 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:20 crc kubenswrapper[4693]: I1212 15:48:20.356330 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:20 crc kubenswrapper[4693]: I1212 15:48:20.356406 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:20 crc kubenswrapper[4693]: I1212 15:48:20.356601 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:20 crc kubenswrapper[4693]: E1212 15:48:20.356707 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:20 crc kubenswrapper[4693]: E1212 15:48:20.356836 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:20 crc kubenswrapper[4693]: E1212 15:48:20.356960 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:21 crc kubenswrapper[4693]: I1212 15:48:21.356970 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:21 crc kubenswrapper[4693]: E1212 15:48:21.357165 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:22 crc kubenswrapper[4693]: I1212 15:48:22.356377 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:22 crc kubenswrapper[4693]: I1212 15:48:22.356482 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:22 crc kubenswrapper[4693]: I1212 15:48:22.356497 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:22 crc kubenswrapper[4693]: E1212 15:48:22.356575 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:22 crc kubenswrapper[4693]: E1212 15:48:22.356681 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:22 crc kubenswrapper[4693]: E1212 15:48:22.356829 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:23 crc kubenswrapper[4693]: I1212 15:48:23.356458 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:23 crc kubenswrapper[4693]: E1212 15:48:23.357530 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:23 crc kubenswrapper[4693]: E1212 15:48:23.496616 4693 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 12 15:48:24 crc kubenswrapper[4693]: I1212 15:48:24.356621 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:24 crc kubenswrapper[4693]: I1212 15:48:24.356666 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:24 crc kubenswrapper[4693]: I1212 15:48:24.356706 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:24 crc kubenswrapper[4693]: I1212 15:48:24.357105 4693 scope.go:117] "RemoveContainer" containerID="3dcd0e248c19f95611ffa8d0a665c032dff039d82f9b088c437e486136574fce" Dec 12 15:48:24 crc kubenswrapper[4693]: E1212 15:48:24.357621 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:24 crc kubenswrapper[4693]: E1212 15:48:24.357670 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:24 crc kubenswrapper[4693]: E1212 15:48:24.357742 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:25 crc kubenswrapper[4693]: I1212 15:48:25.027738 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sllz5_e54028d7-cdbb-4fa9-92cd-9570edacb888/kube-multus/1.log" Dec 12 15:48:25 crc kubenswrapper[4693]: I1212 15:48:25.027803 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sllz5" event={"ID":"e54028d7-cdbb-4fa9-92cd-9570edacb888","Type":"ContainerStarted","Data":"20f65b2d3a7013a476343e6940f753f3203dcd391cc6f30cd35076234e281395"} Dec 12 15:48:25 crc kubenswrapper[4693]: I1212 15:48:25.047122 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-sllz5" podStartSLOduration=111.047093264 podStartE2EDuration="1m51.047093264s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:25.04575156 +0000 UTC m=+132.214391181" watchObservedRunningTime="2025-12-12 15:48:25.047093264 +0000 UTC m=+132.215732895" Dec 12 15:48:25 crc kubenswrapper[4693]: I1212 15:48:25.356586 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:25 crc kubenswrapper[4693]: E1212 15:48:25.356793 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:26 crc kubenswrapper[4693]: I1212 15:48:26.356901 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:26 crc kubenswrapper[4693]: I1212 15:48:26.356972 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:26 crc kubenswrapper[4693]: E1212 15:48:26.357015 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:26 crc kubenswrapper[4693]: I1212 15:48:26.357147 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:26 crc kubenswrapper[4693]: E1212 15:48:26.357158 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:26 crc kubenswrapper[4693]: E1212 15:48:26.357194 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:27 crc kubenswrapper[4693]: I1212 15:48:27.356515 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:27 crc kubenswrapper[4693]: E1212 15:48:27.356929 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:28 crc kubenswrapper[4693]: I1212 15:48:28.356657 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:28 crc kubenswrapper[4693]: E1212 15:48:28.356800 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:28 crc kubenswrapper[4693]: I1212 15:48:28.356686 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:28 crc kubenswrapper[4693]: I1212 15:48:28.356691 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:28 crc kubenswrapper[4693]: E1212 15:48:28.356987 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:28 crc kubenswrapper[4693]: E1212 15:48:28.357183 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:28 crc kubenswrapper[4693]: E1212 15:48:28.497588 4693 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 12 15:48:29 crc kubenswrapper[4693]: I1212 15:48:29.357732 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:29 crc kubenswrapper[4693]: E1212 15:48:29.358003 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:30 crc kubenswrapper[4693]: I1212 15:48:30.356920 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:30 crc kubenswrapper[4693]: I1212 15:48:30.357019 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:30 crc kubenswrapper[4693]: E1212 15:48:30.357112 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:30 crc kubenswrapper[4693]: E1212 15:48:30.357938 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:30 crc kubenswrapper[4693]: I1212 15:48:30.358380 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:30 crc kubenswrapper[4693]: E1212 15:48:30.358578 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:31 crc kubenswrapper[4693]: I1212 15:48:31.356445 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:31 crc kubenswrapper[4693]: E1212 15:48:31.356701 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:32 crc kubenswrapper[4693]: I1212 15:48:32.357071 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:32 crc kubenswrapper[4693]: I1212 15:48:32.357182 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:32 crc kubenswrapper[4693]: I1212 15:48:32.357071 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:32 crc kubenswrapper[4693]: E1212 15:48:32.357246 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:32 crc kubenswrapper[4693]: E1212 15:48:32.357417 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:32 crc kubenswrapper[4693]: E1212 15:48:32.357560 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:32 crc kubenswrapper[4693]: I1212 15:48:32.359589 4693 scope.go:117] "RemoveContainer" containerID="15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147" Dec 12 15:48:33 crc kubenswrapper[4693]: I1212 15:48:33.054012 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovnkube-controller/3.log" Dec 12 15:48:33 crc kubenswrapper[4693]: I1212 15:48:33.057142 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerStarted","Data":"444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25"} Dec 12 15:48:33 crc kubenswrapper[4693]: I1212 15:48:33.057736 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:48:33 crc kubenswrapper[4693]: I1212 15:48:33.090134 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podStartSLOduration=119.090108573 podStartE2EDuration="1m59.090108573s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:33.089485648 +0000 UTC m=+140.258125389" watchObservedRunningTime="2025-12-12 15:48:33.090108573 +0000 UTC m=+140.258748174" Dec 12 15:48:33 crc kubenswrapper[4693]: I1212 15:48:33.178032 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-w4zs6"] Dec 12 15:48:33 crc kubenswrapper[4693]: I1212 15:48:33.178215 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:33 crc kubenswrapper[4693]: E1212 15:48:33.178398 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:33 crc kubenswrapper[4693]: I1212 15:48:33.357661 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:33 crc kubenswrapper[4693]: E1212 15:48:33.359664 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:33 crc kubenswrapper[4693]: E1212 15:48:33.498007 4693 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 12 15:48:34 crc kubenswrapper[4693]: I1212 15:48:34.356868 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:34 crc kubenswrapper[4693]: E1212 15:48:34.357320 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:34 crc kubenswrapper[4693]: I1212 15:48:34.357538 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:34 crc kubenswrapper[4693]: I1212 15:48:34.357582 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:34 crc kubenswrapper[4693]: E1212 15:48:34.357698 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:34 crc kubenswrapper[4693]: E1212 15:48:34.357789 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:35 crc kubenswrapper[4693]: I1212 15:48:35.357137 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:35 crc kubenswrapper[4693]: E1212 15:48:35.357438 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:36 crc kubenswrapper[4693]: I1212 15:48:36.357074 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:36 crc kubenswrapper[4693]: I1212 15:48:36.357117 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:36 crc kubenswrapper[4693]: I1212 15:48:36.357073 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:36 crc kubenswrapper[4693]: E1212 15:48:36.357447 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:36 crc kubenswrapper[4693]: E1212 15:48:36.357227 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:36 crc kubenswrapper[4693]: E1212 15:48:36.357593 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:37 crc kubenswrapper[4693]: I1212 15:48:37.356744 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:37 crc kubenswrapper[4693]: E1212 15:48:37.356964 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 15:48:38 crc kubenswrapper[4693]: I1212 15:48:38.356475 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:38 crc kubenswrapper[4693]: I1212 15:48:38.356475 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:38 crc kubenswrapper[4693]: I1212 15:48:38.356627 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:38 crc kubenswrapper[4693]: E1212 15:48:38.356733 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 15:48:38 crc kubenswrapper[4693]: E1212 15:48:38.356897 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w4zs6" podUID="6ef3804b-c2b3-4645-b60f-9bc977a89f69" Dec 12 15:48:38 crc kubenswrapper[4693]: E1212 15:48:38.356959 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 15:48:39 crc kubenswrapper[4693]: I1212 15:48:39.356566 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:39 crc kubenswrapper[4693]: I1212 15:48:39.359531 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 12 15:48:39 crc kubenswrapper[4693]: I1212 15:48:39.359654 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 12 15:48:40 crc kubenswrapper[4693]: I1212 15:48:40.356516 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:40 crc kubenswrapper[4693]: I1212 15:48:40.356527 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:40 crc kubenswrapper[4693]: I1212 15:48:40.356531 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:40 crc kubenswrapper[4693]: I1212 15:48:40.359882 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 12 15:48:40 crc kubenswrapper[4693]: I1212 15:48:40.360362 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 12 15:48:40 crc kubenswrapper[4693]: I1212 15:48:40.360601 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 12 15:48:40 crc kubenswrapper[4693]: I1212 15:48:40.361976 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 12 15:48:42 crc kubenswrapper[4693]: I1212 15:48:42.957775 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:48:42 crc kubenswrapper[4693]: I1212 15:48:42.958080 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.132938 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.133032 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.133065 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.133096 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:43 crc kubenswrapper[4693]: E1212 15:48:43.133165 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:50:45.133134 +0000 UTC m=+272.301773671 (durationBeforeRetry 2m2s). 
Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.133294 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.134211 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.138844 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.138858 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.142171 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.279761 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.379008 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.393656 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 15:48:43 crc kubenswrapper[4693]: W1212 15:48:43.578325 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-0322ddb02081469f9b35638bd28849909f4d1ce9eb5f50733112281c2f407150 WatchSource:0}: Error finding container 0322ddb02081469f9b35638bd28849909f4d1ce9eb5f50733112281c2f407150: Status 404 returned error can't find the container with id 0322ddb02081469f9b35638bd28849909f4d1ce9eb5f50733112281c2f407150 Dec 12 15:48:43 crc kubenswrapper[4693]: W1212 15:48:43.667216 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-f63b0b3d178b7ae3a32538bbd1ce69373faac1d97bf8d8df282b4c2981b0d139 WatchSource:0}: Error finding container f63b0b3d178b7ae3a32538bbd1ce69373faac1d97bf8d8df282b4c2981b0d139: Status 404 returned error can't find the container with id f63b0b3d178b7ae3a32538bbd1ce69373faac1d97bf8d8df282b4c2981b0d139 Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.966728 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0c6c6f7ea7fd6cd334a603d7535f3b77cdf254bfab7126649d9d2f45d322e881"} Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.966786 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0322ddb02081469f9b35638bd28849909f4d1ce9eb5f50733112281c2f407150"} Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.969485 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8780d2d9fd309a668ba9da180dc994212032a3128bb3475292f81a54af4764c4"} Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.969516 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f63b0b3d178b7ae3a32538bbd1ce69373faac1d97bf8d8df282b4c2981b0d139"} Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.970855 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3ece6d4a69f0f4f088a1cd1458a6de748b05000b8218f3b4215c167c2b51b02a"} Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.970904 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4601b506d7c6618c75f01c9a81444d597b70ae242542b264ed950fc36a326979"} Dec 12 15:48:43 crc kubenswrapper[4693]: I1212 15:48:43.971086 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.462096 4693 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeReady" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.514837 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.515496 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.516175 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-47c86"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.516804 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: W1212 15:48:44.521851 4693 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: configmaps "machine-approver-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Dec 12 15:48:44 crc kubenswrapper[4693]: W1212 15:48:44.521905 4693 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Dec 12 15:48:44 crc kubenswrapper[4693]: E1212 15:48:44.521941 4693 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-approver-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 12 15:48:44 crc kubenswrapper[4693]: W1212 15:48:44.521877 4693 reflector.go:561] object-"openshift-image-registry"/"registry-dockercfg-kzzsd": failed to list *v1.Secret: secrets "registry-dockercfg-kzzsd" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Dec 12 15:48:44 crc kubenswrapper[4693]: E1212 15:48:44.521989 4693 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 12 15:48:44 crc kubenswrapper[4693]: E1212 15:48:44.522008 4693 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"registry-dockercfg-kzzsd\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"registry-dockercfg-kzzsd\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 12 
15:48:44 crc kubenswrapper[4693]: W1212 15:48:44.522021 4693 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: secrets "machine-approver-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Dec 12 15:48:44 crc kubenswrapper[4693]: E1212 15:48:44.522090 4693 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 12 15:48:44 crc kubenswrapper[4693]: W1212 15:48:44.523542 4693 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Dec 12 15:48:44 crc kubenswrapper[4693]: E1212 15:48:44.523578 4693 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
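
Note: the forbidden LIST/WATCH warnings above are the node authorizer working as designed: a kubelet may only read Secrets and ConfigMaps referenced by pods already bound to its node, and immediately after NodeReady (15:48:44) many pods are being bound at once, so the authorizer's pod-to-node graph can briefly lag the kubelet's reflectors. The "Caches populated" lines that follow show each reflector succeeding on a retry. A toy model of that retry pattern (the listConfigMap stand-in and its third-attempt success are invented for illustration; real reflectors use client-go with jittered backoff):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errForbidden = errors.New(`forbidden: no relationship found between node 'crc' and this object`)

    // listConfigMap stands in for a reflector's initial LIST; here it is
    // authorized only once the pod-to-node binding has propagated.
    func listConfigMap(attempt int) error {
        if attempt < 3 {
            return errForbidden
        }
        return nil
    }

    func main() {
        for attempt := 1; ; attempt++ {
            if err := listConfigMap(attempt); err != nil {
                fmt.Printf("attempt %d: %v (will retry)\n", attempt, err)
                time.Sleep(time.Second) // simplified; the real backoff is jittered
                continue
            }
            fmt.Printf("attempt %d: caches populated\n", attempt)
            return
        }
    }
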
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2dvf" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.534073 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.534366 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.538900 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 12 15:48:44 crc kubenswrapper[4693]: W1212 15:48:44.538964 4693 reflector.go:561] object-"openshift-image-registry"/"trusted-ca": failed to list *v1.ConfigMap: configmaps "trusted-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Dec 12 15:48:44 crc kubenswrapper[4693]: E1212 15:48:44.539018 4693 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.539039 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wp4hr"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.539957 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wp4hr" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.540773 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.541187 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.548007 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.548570 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.548917 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.549198 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.549443 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.549718 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.549958 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.550251 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.550442 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.550611 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.551040 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.551311 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.551855 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.552355 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.558775 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.559575 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.559995 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.560336 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.563209 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.564012 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.564297 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-b7rfx"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.564543 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.565440 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.565478 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.565779 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d28jp"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.569342 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xqnqh"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.569645 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.569839 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wm8d5"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.569993 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.571052 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.580916 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.581318 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.580917 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.581988 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.583005 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.583591 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.584063 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.584380 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.584658 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.585345 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.585468 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.585556 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.585673 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-88srp"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.586047 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.586204 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.586481 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.586104 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wz942"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.589765 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.595462 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.596191 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.598047 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.598187 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.598669 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-m7l28"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.599073 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.599335 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.599604 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.599727 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.599831 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.599920 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.599993 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.600062 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.600134 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.600182 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.600211 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.600291 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.600371 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.612562 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.613671 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hfmz9"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.614255 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.617461 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.617464 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.617493 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.617505 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.617527 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.617540 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.617564 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.617571 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.636924 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.637401 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.637568 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.637874 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.637964 4693 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"kube-root-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.638074 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.638195 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.638206 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.639898 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fcvw7"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.638240 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.638290 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.638338 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.638366 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.638392 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.638399 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.637086 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.638433 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.638439 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.639000 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.639575 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.639618 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.639650 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.639724 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.639722 4693 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.639760 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.639785 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.639816 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.640034 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.640120 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.640154 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.640197 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.640226 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.640257 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.640300 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.640406 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.640441 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.643004 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.643561 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fcvw7" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.643732 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.644453 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.650205 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.650790 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651148 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5vnt\" (UniqueName: \"kubernetes.io/projected/19166153-66f0-4f4f-8f4b-ef7af5a72770-kube-api-access-t5vnt\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651169 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28dc9895-00d3-4e72-930a-ea9b0ca468c4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2dvf\" (UID: \"28dc9895-00d3-4e72-930a-ea9b0ca468c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2dvf" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651185 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f28a792f-4814-4a24-ab79-3a5b00adb25e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651202 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/53109c03-d846-4eaa-a01e-7aca23a720f6-proxy-tls\") pod \"machine-config-controller-84d6567774-n4mbr\" (UID: \"53109c03-d846-4eaa-a01e-7aca23a720f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651223 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5fzs\" (UniqueName: \"kubernetes.io/projected/53109c03-d846-4eaa-a01e-7aca23a720f6-kube-api-access-m5fzs\") pod \"machine-config-controller-84d6567774-n4mbr\" (UID: \"53109c03-d846-4eaa-a01e-7aca23a720f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651260 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-config\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651293 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4321ba4d-fc67-4945-a86a-9b6f30ab66ce-metrics-tls\") pod \"dns-operator-744455d44c-wp4hr\" (UID: \"4321ba4d-fc67-4945-a86a-9b6f30ab66ce\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-wp4hr" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651308 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-trusted-ca\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651323 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-registry-certificates\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651346 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651362 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/53109c03-d846-4eaa-a01e-7aca23a720f6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n4mbr\" (UID: \"53109c03-d846-4eaa-a01e-7aca23a720f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651378 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-auth-proxy-config\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651393 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28dc9895-00d3-4e72-930a-ea9b0ca468c4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2dvf\" (UID: \"28dc9895-00d3-4e72-930a-ea9b0ca468c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2dvf" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651408 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-bound-sa-token\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651424 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv2vw\" (UniqueName: \"kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-kube-api-access-qv2vw\") pod \"image-registry-697d97f7c8-47c86\" (UID: 
\"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651442 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/19166153-66f0-4f4f-8f4b-ef7af5a72770-machine-approver-tls\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651459 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfpm7\" (UniqueName: \"kubernetes.io/projected/4321ba4d-fc67-4945-a86a-9b6f30ab66ce-kube-api-access-rfpm7\") pod \"dns-operator-744455d44c-wp4hr\" (UID: \"4321ba4d-fc67-4945-a86a-9b6f30ab66ce\") " pod="openshift-dns-operator/dns-operator-744455d44c-wp4hr" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651474 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-registry-tls\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651487 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28dc9895-00d3-4e72-930a-ea9b0ca468c4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2dvf\" (UID: \"28dc9895-00d3-4e72-930a-ea9b0ca468c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2dvf" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651509 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f28a792f-4814-4a24-ab79-3a5b00adb25e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651648 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651765 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651864 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: E1212 15:48:44.651898 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:45.151880576 +0000 UTC m=+152.320520257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.651968 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.655313 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.655782 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.656535 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.656843 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.658942 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bz9v2"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.670830 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.671182 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.671325 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bz9v2" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.671566 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658pn"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.671741 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.671901 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658pn" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.672564 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-spsbw"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.673805 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-spsbw" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.676396 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.676881 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.677483 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.680637 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.683389 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.683923 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.692176 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qp87x"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.692786 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qp87x" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.693990 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wcl2w"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.694388 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.704824 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.707031 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.707909 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-npwzs"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.709061 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.710248 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-qd9z7"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.710335 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.710825 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-qd9z7" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.714123 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.714903 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.714923 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.717660 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.719362 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.719505 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.719878 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.719933 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.721119 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.721944 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.723264 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.724502 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-qnfsm"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.725708 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qnfsm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.730062 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-47c86"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.732881 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.733329 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d28jp"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.734594 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.737146 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-b7rfx"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.737642 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.741518 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.742317 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-88srp"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.744595 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wz942"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.746981 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.748644 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2dvf"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754200 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754399 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754436 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-auth-proxy-config\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:44 
crc kubenswrapper[4693]: I1212 15:48:44.754465 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f59b453-c693-4382-b7f5-82d3c8ee48e9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-29ww6\" (UID: \"4f59b453-c693-4382-b7f5-82d3c8ee48e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754489 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw27f\" (UniqueName: \"kubernetes.io/projected/49c82763-4d39-4424-8aa0-745158bd96c6-kube-api-access-kw27f\") pod \"router-default-5444994796-hfmz9\" (UID: \"49c82763-4d39-4424-8aa0-745158bd96c6\") " pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754512 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-client-ca\") pod \"controller-manager-879f6c89f-88srp\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754534 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/901e060a-e400-40cd-bd50-d8bfb7c5127a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ks47b\" (UID: \"901e060a-e400-40cd-bd50-d8bfb7c5127a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754549 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7037e0f8-e094-40a4-9188-7dc2fdd1b4a6-config\") pod \"machine-api-operator-5694c8668f-m7l28\" (UID: \"7037e0f8-e094-40a4-9188-7dc2fdd1b4a6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754566 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k47hd\" (UID: \"1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754590 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754611 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hts9j\" (UniqueName: \"kubernetes.io/projected/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-kube-api-access-hts9j\") pod \"controller-manager-879f6c89f-88srp\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754625 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/31b7f38e-5f91-43bf-bba4-bc8592747704-node-pullsecrets\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754639 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/31b7f38e-5f91-43bf-bba4-bc8592747704-image-import-ca\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754664 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94c146b4-f621-42ff-b0db-5e471b8938b6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754684 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31b7f38e-5f91-43bf-bba4-bc8592747704-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754703 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbt64\" (UniqueName: \"kubernetes.io/projected/9675a84b-88dc-4a3c-8fe9-070088ada9b1-kube-api-access-xbt64\") pod \"cluster-samples-operator-665b6dd947-fcvw7\" (UID: \"9675a84b-88dc-4a3c-8fe9-070088ada9b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fcvw7" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754719 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-serving-cert\") pod \"controller-manager-879f6c89f-88srp\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754732 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c536f6e-dc3f-407b-81bc-ad0febbae611-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qdcck\" (UID: \"4c536f6e-dc3f-407b-81bc-ad0febbae611\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754751 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38f31f08-dc12-4feb-8567-ab19705f0e16-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pxqqx\" (UID: \"38f31f08-dc12-4feb-8567-ab19705f0e16\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754767 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49c82763-4d39-4424-8aa0-745158bd96c6-metrics-certs\") pod \"router-default-5444994796-hfmz9\" (UID: \"49c82763-4d39-4424-8aa0-745158bd96c6\") " pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754785 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s28nw\" (UniqueName: \"kubernetes.io/projected/6be45ad2-fe1c-4b29-8aa8-c5eec39978a3-kube-api-access-s28nw\") pod \"control-plane-machine-set-operator-78cbb6b69f-658pn\" (UID: \"6be45ad2-fe1c-4b29-8aa8-c5eec39978a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658pn" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754806 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754826 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94c146b4-f621-42ff-b0db-5e471b8938b6-encryption-config\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754846 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94c146b4-f621-42ff-b0db-5e471b8938b6-serving-cert\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754860 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8832f47a-79fe-4045-91d9-d42f21a2652f-config\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754879 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2850071-0204-4052-b9f3-863243d3300b-trusted-ca\") pod \"ingress-operator-5b745b69d9-gr2s6\" (UID: \"c2850071-0204-4052-b9f3-863243d3300b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754903 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28dc9895-00d3-4e72-930a-ea9b0ca468c4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2dvf\" (UID: \"28dc9895-00d3-4e72-930a-ea9b0ca468c4\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2dvf" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754952 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94c146b4-f621-42ff-b0db-5e471b8938b6-audit-dir\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754973 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hhs2z\" (UID: \"38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.754991 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9675a84b-88dc-4a3c-8fe9-070088ada9b1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fcvw7\" (UID: \"9675a84b-88dc-4a3c-8fe9-070088ada9b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fcvw7" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755009 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-serving-cert\") pod \"route-controller-manager-6576b87f9c-pzsn6\" (UID: \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755024 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/38f31f08-dc12-4feb-8567-ab19705f0e16-images\") pod \"machine-config-operator-74547568cd-pxqqx\" (UID: \"38f31f08-dc12-4feb-8567-ab19705f0e16\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755040 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvk7z\" (UniqueName: \"kubernetes.io/projected/d64fcdf8-100d-4628-beb2-126a10b8f71c-kube-api-access-cvk7z\") pod \"migrator-59844c95c7-spsbw\" (UID: \"d64fcdf8-100d-4628-beb2-126a10b8f71c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-spsbw" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755055 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-trusted-ca-bundle\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755072 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/901e060a-e400-40cd-bd50-d8bfb7c5127a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ks47b\" (UID: \"901e060a-e400-40cd-bd50-d8bfb7c5127a\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755086 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755101 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt4x5\" (UniqueName: \"kubernetes.io/projected/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-kube-api-access-qt4x5\") pod \"route-controller-manager-6576b87f9c-pzsn6\" (UID: \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755116 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/49c82763-4d39-4424-8aa0-745158bd96c6-stats-auth\") pod \"router-default-5444994796-hfmz9\" (UID: \"49c82763-4d39-4424-8aa0-745158bd96c6\") " pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755133 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28dc9895-00d3-4e72-930a-ea9b0ca468c4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2dvf\" (UID: \"28dc9895-00d3-4e72-930a-ea9b0ca468c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2dvf" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755149 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44b1f98a-591c-49e4-9d2a-a0130f336528-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hr46r\" (UID: \"44b1f98a-591c-49e4-9d2a-a0130f336528\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755164 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31b7f38e-5f91-43bf-bba4-bc8592747704-audit-dir\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755179 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755196 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/62fa5de9-a571-40e5-a32c-e1708a428f19-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-w6x8t\" (UID: \"62fa5de9-a571-40e5-a32c-e1708a428f19\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755210 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/901e060a-e400-40cd-bd50-d8bfb7c5127a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ks47b\" (UID: \"901e060a-e400-40cd-bd50-d8bfb7c5127a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755225 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/31b7f38e-5f91-43bf-bba4-bc8592747704-etcd-serving-ca\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755241 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbrlk\" (UniqueName: \"kubernetes.io/projected/87e8f397-20cd-469f-924d-204ce1a8db47-kube-api-access-sbrlk\") pod \"authentication-operator-69f744f599-wz942\" (UID: \"87e8f397-20cd-469f-924d-204ce1a8db47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755259 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6be45ad2-fe1c-4b29-8aa8-c5eec39978a3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-658pn\" (UID: \"6be45ad2-fe1c-4b29-8aa8-c5eec39978a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658pn" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755295 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-audit-policies\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755315 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94c146b4-f621-42ff-b0db-5e471b8938b6-etcd-client\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755329 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94c146b4-f621-42ff-b0db-5e471b8938b6-audit-policies\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755345 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/87e8f397-20cd-469f-924d-204ce1a8db47-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wz942\" (UID: \"87e8f397-20cd-469f-924d-204ce1a8db47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755360 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2qhw\" (UniqueName: \"kubernetes.io/projected/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-kube-api-access-f2qhw\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755380 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-config\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755395 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/050ec804-2082-4b39-8699-28d1c1992425-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nnqfp\" (UID: \"050ec804-2082-4b39-8699-28d1c1992425\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755412 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5fzs\" (UniqueName: \"kubernetes.io/projected/53109c03-d846-4eaa-a01e-7aca23a720f6-kube-api-access-m5fzs\") pod \"machine-config-controller-84d6567774-n4mbr\" (UID: \"53109c03-d846-4eaa-a01e-7aca23a720f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755430 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4321ba4d-fc67-4945-a86a-9b6f30ab66ce-metrics-tls\") pod \"dns-operator-744455d44c-wp4hr\" (UID: \"4321ba4d-fc67-4945-a86a-9b6f30ab66ce\") " pod="openshift-dns-operator/dns-operator-744455d44c-wp4hr" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755446 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/31b7f38e-5f91-43bf-bba4-bc8592747704-encryption-config\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755464 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-trusted-ca\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755480 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2850071-0204-4052-b9f3-863243d3300b-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-gr2s6\" (UID: \"c2850071-0204-4052-b9f3-863243d3300b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755497 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755513 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87e8f397-20cd-469f-924d-204ce1a8db47-config\") pod \"authentication-operator-69f744f599-wz942\" (UID: \"87e8f397-20cd-469f-924d-204ce1a8db47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755528 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755542 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-client-ca\") pod \"route-controller-manager-6576b87f9c-pzsn6\" (UID: \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755558 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7037e0f8-e094-40a4-9188-7dc2fdd1b4a6-images\") pod \"machine-api-operator-5694c8668f-m7l28\" (UID: \"7037e0f8-e094-40a4-9188-7dc2fdd1b4a6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755583 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6lks\" (UniqueName: \"kubernetes.io/projected/44b1f98a-591c-49e4-9d2a-a0130f336528-kube-api-access-j6lks\") pod \"kube-storage-version-migrator-operator-b67b599dd-hr46r\" (UID: \"44b1f98a-591c-49e4-9d2a-a0130f336528\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755598 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg7zr\" (UniqueName: \"kubernetes.io/projected/38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95-kube-api-access-sg7zr\") pod \"olm-operator-6b444d44fb-hhs2z\" (UID: \"38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755616 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/49c82763-4d39-4424-8aa0-745158bd96c6-default-certificate\") pod \"router-default-5444994796-hfmz9\" (UID: \"49c82763-4d39-4424-8aa0-745158bd96c6\") " pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755631 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/31b7f38e-5f91-43bf-bba4-bc8592747704-etcd-client\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755646 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28dc9895-00d3-4e72-930a-ea9b0ca468c4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2dvf\" (UID: \"28dc9895-00d3-4e72-930a-ea9b0ca468c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2dvf" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755663 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755681 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755697 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94c146b4-f621-42ff-b0db-5e471b8938b6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755715 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-bound-sa-token\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755729 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjc5p\" (UniqueName: \"kubernetes.io/projected/4c536f6e-dc3f-407b-81bc-ad0febbae611-kube-api-access-cjc5p\") pod \"openshift-apiserver-operator-796bbdcf4f-qdcck\" (UID: \"4c536f6e-dc3f-407b-81bc-ad0febbae611\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755745 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv2vw\" (UniqueName: 
\"kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-kube-api-access-qv2vw\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755760 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/19166153-66f0-4f4f-8f4b-ef7af5a72770-machine-approver-tls\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755778 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwl4p\" (UniqueName: \"kubernetes.io/projected/38f31f08-dc12-4feb-8567-ab19705f0e16-kube-api-access-nwl4p\") pod \"machine-config-operator-74547568cd-pxqqx\" (UID: \"38f31f08-dc12-4feb-8567-ab19705f0e16\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755794 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7037e0f8-e094-40a4-9188-7dc2fdd1b4a6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-m7l28\" (UID: \"7037e0f8-e094-40a4-9188-7dc2fdd1b4a6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755810 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9h78\" (UniqueName: \"kubernetes.io/projected/586d0874-ebe1-41db-b596-1dfed12b2b94-kube-api-access-c9h78\") pod \"downloads-7954f5f757-bz9v2\" (UID: \"586d0874-ebe1-41db-b596-1dfed12b2b94\") " pod="openshift-console/downloads-7954f5f757-bz9v2" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755826 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-service-ca\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755841 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f59b453-c693-4382-b7f5-82d3c8ee48e9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-29ww6\" (UID: \"4f59b453-c693-4382-b7f5-82d3c8ee48e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755857 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c2850071-0204-4052-b9f3-863243d3300b-metrics-tls\") pod \"ingress-operator-5b745b69d9-gr2s6\" (UID: \"c2850071-0204-4052-b9f3-863243d3300b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755872 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/8832f47a-79fe-4045-91d9-d42f21a2652f-etcd-ca\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755891 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfpm7\" (UniqueName: \"kubernetes.io/projected/4321ba4d-fc67-4945-a86a-9b6f30ab66ce-kube-api-access-rfpm7\") pod \"dns-operator-744455d44c-wp4hr\" (UID: \"4321ba4d-fc67-4945-a86a-9b6f30ab66ce\") " pod="openshift-dns-operator/dns-operator-744455d44c-wp4hr" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755907 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-registry-tls\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755923 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755941 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwr5r\" (UniqueName: \"kubernetes.io/projected/901e060a-e400-40cd-bd50-d8bfb7c5127a-kube-api-access-nwr5r\") pod \"cluster-image-registry-operator-dc59b4c8b-ks47b\" (UID: \"901e060a-e400-40cd-bd50-d8bfb7c5127a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755957 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-config\") pod \"route-controller-manager-6576b87f9c-pzsn6\" (UID: \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755972 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-88srp\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.755988 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-serving-cert\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756005 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/f28a792f-4814-4a24-ab79-3a5b00adb25e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756022 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87e8f397-20cd-469f-924d-204ce1a8db47-service-ca-bundle\") pod \"authentication-operator-69f744f599-wz942\" (UID: \"87e8f397-20cd-469f-924d-204ce1a8db47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756041 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/38f31f08-dc12-4feb-8567-ab19705f0e16-proxy-tls\") pod \"machine-config-operator-74547568cd-pxqqx\" (UID: \"38f31f08-dc12-4feb-8567-ab19705f0e16\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756056 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95-srv-cert\") pod \"olm-operator-6b444d44fb-hhs2z\" (UID: \"38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756071 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b79gp\" (UniqueName: \"kubernetes.io/projected/62fa5de9-a571-40e5-a32c-e1708a428f19-kube-api-access-b79gp\") pod \"openshift-config-operator-7777fb866f-w6x8t\" (UID: \"62fa5de9-a571-40e5-a32c-e1708a428f19\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756086 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkqhx\" (UniqueName: \"kubernetes.io/projected/8832f47a-79fe-4045-91d9-d42f21a2652f-kube-api-access-jkqhx\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756103 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5vnt\" (UniqueName: \"kubernetes.io/projected/19166153-66f0-4f4f-8f4b-ef7af5a72770-kube-api-access-t5vnt\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756119 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8832f47a-79fe-4045-91d9-d42f21a2652f-serving-cert\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756135 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/8832f47a-79fe-4045-91d9-d42f21a2652f-etcd-service-ca\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756152 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f28a792f-4814-4a24-ab79-3a5b00adb25e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756170 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf-config\") pod \"kube-apiserver-operator-766d6c64bb-k47hd\" (UID: \"1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756186 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62fa5de9-a571-40e5-a32c-e1708a428f19-serving-cert\") pod \"openshift-config-operator-7777fb866f-w6x8t\" (UID: \"62fa5de9-a571-40e5-a32c-e1708a428f19\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756205 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/53109c03-d846-4eaa-a01e-7aca23a720f6-proxy-tls\") pod \"machine-config-controller-84d6567774-n4mbr\" (UID: \"53109c03-d846-4eaa-a01e-7aca23a720f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756220 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxfmh\" (UniqueName: \"kubernetes.io/projected/c2850071-0204-4052-b9f3-863243d3300b-kube-api-access-nxfmh\") pod \"ingress-operator-5b745b69d9-gr2s6\" (UID: \"c2850071-0204-4052-b9f3-863243d3300b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756236 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-config\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756250 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31b7f38e-5f91-43bf-bba4-bc8592747704-config\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756265 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlvbg\" (UniqueName: \"kubernetes.io/projected/94c146b4-f621-42ff-b0db-5e471b8938b6-kube-api-access-vlvbg\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: 
\"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756304 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb49v\" (UniqueName: \"kubernetes.io/projected/050ec804-2082-4b39-8699-28d1c1992425-kube-api-access-pb49v\") pod \"openshift-controller-manager-operator-756b6f6bc6-nnqfp\" (UID: \"050ec804-2082-4b39-8699-28d1c1992425\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756326 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-oauth-config\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756345 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f743d3ca-28a7-4e25-955f-1385b9ef8c05-audit-dir\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756348 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756363 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/31b7f38e-5f91-43bf-bba4-bc8592747704-audit\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756459 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/050ec804-2082-4b39-8699-28d1c1992425-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nnqfp\" (UID: \"050ec804-2082-4b39-8699-28d1c1992425\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756480 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c536f6e-dc3f-407b-81bc-ad0febbae611-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qdcck\" (UID: \"4c536f6e-dc3f-407b-81bc-ad0febbae611\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756519 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-registry-certificates\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756539 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/4f59b453-c693-4382-b7f5-82d3c8ee48e9-config\") pod \"kube-controller-manager-operator-78b949d7b-29ww6\" (UID: \"4f59b453-c693-4382-b7f5-82d3c8ee48e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756557 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c82763-4d39-4424-8aa0-745158bd96c6-service-ca-bundle\") pod \"router-default-5444994796-hfmz9\" (UID: \"49c82763-4d39-4424-8aa0-745158bd96c6\") " pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.756574 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f25h\" (UniqueName: \"kubernetes.io/projected/7037e0f8-e094-40a4-9188-7dc2fdd1b4a6-kube-api-access-4f25h\") pod \"machine-api-operator-5694c8668f-m7l28\" (UID: \"7037e0f8-e094-40a4-9188-7dc2fdd1b4a6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.757960 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44b1f98a-591c-49e4-9d2a-a0130f336528-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hr46r\" (UID: \"44b1f98a-591c-49e4-9d2a-a0130f336528\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.757985 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.758008 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k47hd\" (UID: \"1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.758025 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87e8f397-20cd-469f-924d-204ce1a8db47-serving-cert\") pod \"authentication-operator-69f744f599-wz942\" (UID: \"87e8f397-20cd-469f-924d-204ce1a8db47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.758042 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31b7f38e-5f91-43bf-bba4-bc8592747704-serving-cert\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.758086 4693 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8832f47a-79fe-4045-91d9-d42f21a2652f-etcd-client\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.758107 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/53109c03-d846-4eaa-a01e-7aca23a720f6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n4mbr\" (UID: \"53109c03-d846-4eaa-a01e-7aca23a720f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.758125 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-config\") pod \"controller-manager-879f6c89f-88srp\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.758141 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-oauth-serving-cert\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.758157 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nwbk\" (UniqueName: \"kubernetes.io/projected/31b7f38e-5f91-43bf-bba4-bc8592747704-kube-api-access-5nwbk\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.758172 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64kxm\" (UniqueName: \"kubernetes.io/projected/f743d3ca-28a7-4e25-955f-1385b9ef8c05-kube-api-access-64kxm\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.758223 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28dc9895-00d3-4e72-930a-ea9b0ca468c4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2dvf\" (UID: \"28dc9895-00d3-4e72-930a-ea9b0ca468c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2dvf" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.759130 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/53109c03-d846-4eaa-a01e-7aca23a720f6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n4mbr\" (UID: \"53109c03-d846-4eaa-a01e-7aca23a720f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.760197 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/28dc9895-00d3-4e72-930a-ea9b0ca468c4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2dvf\" (UID: \"28dc9895-00d3-4e72-930a-ea9b0ca468c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2dvf" Dec 12 15:48:44 crc kubenswrapper[4693]: E1212 15:48:44.760439 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:45.260416322 +0000 UTC m=+152.429055994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.760444 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f28a792f-4814-4a24-ab79-3a5b00adb25e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.760771 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f28a792f-4814-4a24-ab79-3a5b00adb25e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.759934 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.761443 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qp87x"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.761858 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-registry-certificates\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.763880 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.763944 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.764677 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.764949 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/4321ba4d-fc67-4945-a86a-9b6f30ab66ce-metrics-tls\") pod \"dns-operator-744455d44c-wp4hr\" (UID: \"4321ba4d-fc67-4945-a86a-9b6f30ab66ce\") " pod="openshift-dns-operator/dns-operator-744455d44c-wp4hr" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.765961 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.767225 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xqnqh"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.768081 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658pn"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.768910 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.769883 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bz9v2"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.772091 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.772117 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fcvw7"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.773259 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wm8d5"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.779146 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/53109c03-d846-4eaa-a01e-7aca23a720f6-proxy-tls\") pod \"machine-config-controller-84d6567774-n4mbr\" (UID: \"53109c03-d846-4eaa-a01e-7aca23a720f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.779465 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-registry-tls\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.780226 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.780413 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-m7l28"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.782111 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.783287 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wp4hr"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.785530 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd"] Dec 12 15:48:44 
crc kubenswrapper[4693]: I1212 15:48:44.785638 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wcl2w"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.786210 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-spsbw"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.797008 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.798160 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.802403 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.803635 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-42cgp"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.804425 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-42cgp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.805431 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wwxcz"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.806753 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.808603 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-qd9z7"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.808649 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-npwzs"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.809988 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wwxcz"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.811569 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.813434 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-42cgp"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.814985 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wd8bp"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.816125 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-wd8bp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.816439 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wd8bp"] Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.819127 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.837593 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.858904 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/31b7f38e-5f91-43bf-bba4-bc8592747704-audit\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.858943 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/050ec804-2082-4b39-8699-28d1c1992425-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nnqfp\" (UID: \"050ec804-2082-4b39-8699-28d1c1992425\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.858962 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c536f6e-dc3f-407b-81bc-ad0febbae611-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qdcck\" (UID: \"4c536f6e-dc3f-407b-81bc-ad0febbae611\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.858978 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f25h\" (UniqueName: \"kubernetes.io/projected/7037e0f8-e094-40a4-9188-7dc2fdd1b4a6-kube-api-access-4f25h\") pod \"machine-api-operator-5694c8668f-m7l28\" (UID: \"7037e0f8-e094-40a4-9188-7dc2fdd1b4a6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.858995 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f59b453-c693-4382-b7f5-82d3c8ee48e9-config\") pod \"kube-controller-manager-operator-78b949d7b-29ww6\" (UID: \"4f59b453-c693-4382-b7f5-82d3c8ee48e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859012 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c82763-4d39-4424-8aa0-745158bd96c6-service-ca-bundle\") pod \"router-default-5444994796-hfmz9\" (UID: \"49c82763-4d39-4424-8aa0-745158bd96c6\") " pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859030 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859046 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44b1f98a-591c-49e4-9d2a-a0130f336528-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hr46r\" (UID: \"44b1f98a-591c-49e4-9d2a-a0130f336528\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859062 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859076 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k47hd\" (UID: \"1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859096 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/80dd1d93-b2bd-4fad-b199-aa072c2c8216-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pqr7h\" (UID: \"80dd1d93-b2bd-4fad-b199-aa072c2c8216\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859115 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31b7f38e-5f91-43bf-bba4-bc8592747704-serving-cert\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859129 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87e8f397-20cd-469f-924d-204ce1a8db47-serving-cert\") pod \"authentication-operator-69f744f599-wz942\" (UID: \"87e8f397-20cd-469f-924d-204ce1a8db47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859145 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8832f47a-79fe-4045-91d9-d42f21a2652f-etcd-client\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859160 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-oauth-serving-cert\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 
15:48:44.859174 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nwbk\" (UniqueName: \"kubernetes.io/projected/31b7f38e-5f91-43bf-bba4-bc8592747704-kube-api-access-5nwbk\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859188 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64kxm\" (UniqueName: \"kubernetes.io/projected/f743d3ca-28a7-4e25-955f-1385b9ef8c05-kube-api-access-64kxm\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859203 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-config\") pod \"controller-manager-879f6c89f-88srp\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859220 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrg2q\" (UniqueName: \"kubernetes.io/projected/61620225-2125-49da-94f6-f6ef9dd7e6ce-kube-api-access-lrg2q\") pod \"packageserver-d55dfcdfc-d86dw\" (UID: \"61620225-2125-49da-94f6-f6ef9dd7e6ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859234 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjwsj\" (UniqueName: \"kubernetes.io/projected/7b1c4746-f772-49d8-be11-9abc850ea7e2-kube-api-access-kjwsj\") pod \"marketplace-operator-79b997595-npwzs\" (UID: \"7b1c4746-f772-49d8-be11-9abc850ea7e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859250 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859289 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/901e060a-e400-40cd-bd50-d8bfb7c5127a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ks47b\" (UID: \"901e060a-e400-40cd-bd50-d8bfb7c5127a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859305 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7037e0f8-e094-40a4-9188-7dc2fdd1b4a6-config\") pod \"machine-api-operator-5694c8668f-m7l28\" (UID: \"7037e0f8-e094-40a4-9188-7dc2fdd1b4a6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859322 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k47hd\" (UID: \"1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859341 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f59b453-c693-4382-b7f5-82d3c8ee48e9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-29ww6\" (UID: \"4f59b453-c693-4382-b7f5-82d3c8ee48e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859357 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw27f\" (UniqueName: \"kubernetes.io/projected/49c82763-4d39-4424-8aa0-745158bd96c6-kube-api-access-kw27f\") pod \"router-default-5444994796-hfmz9\" (UID: \"49c82763-4d39-4424-8aa0-745158bd96c6\") " pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859374 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-client-ca\") pod \"controller-manager-879f6c89f-88srp\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859394 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859416 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hts9j\" (UniqueName: \"kubernetes.io/projected/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-kube-api-access-hts9j\") pod \"controller-manager-879f6c89f-88srp\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859440 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/31b7f38e-5f91-43bf-bba4-bc8592747704-node-pullsecrets\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859458 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/31b7f38e-5f91-43bf-bba4-bc8592747704-image-import-ca\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859477 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61620225-2125-49da-94f6-f6ef9dd7e6ce-apiservice-cert\") pod 
\"packageserver-d55dfcdfc-d86dw\" (UID: \"61620225-2125-49da-94f6-f6ef9dd7e6ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859495 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-npwzs\" (UID: \"7b1c4746-f772-49d8-be11-9abc850ea7e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859513 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31b7f38e-5f91-43bf-bba4-bc8592747704-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859530 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbt64\" (UniqueName: \"kubernetes.io/projected/9675a84b-88dc-4a3c-8fe9-070088ada9b1-kube-api-access-xbt64\") pod \"cluster-samples-operator-665b6dd947-fcvw7\" (UID: \"9675a84b-88dc-4a3c-8fe9-070088ada9b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fcvw7" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859545 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94c146b4-f621-42ff-b0db-5e471b8938b6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859560 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38f31f08-dc12-4feb-8567-ab19705f0e16-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pxqqx\" (UID: \"38f31f08-dc12-4feb-8567-ab19705f0e16\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859574 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49c82763-4d39-4424-8aa0-745158bd96c6-metrics-certs\") pod \"router-default-5444994796-hfmz9\" (UID: \"49c82763-4d39-4424-8aa0-745158bd96c6\") " pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859590 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-serving-cert\") pod \"controller-manager-879f6c89f-88srp\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859604 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c536f6e-dc3f-407b-81bc-ad0febbae611-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qdcck\" (UID: \"4c536f6e-dc3f-407b-81bc-ad0febbae611\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859621 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s28nw\" (UniqueName: \"kubernetes.io/projected/6be45ad2-fe1c-4b29-8aa8-c5eec39978a3-kube-api-access-s28nw\") pod \"control-plane-machine-set-operator-78cbb6b69f-658pn\" (UID: \"6be45ad2-fe1c-4b29-8aa8-c5eec39978a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658pn" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859642 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859660 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94c146b4-f621-42ff-b0db-5e471b8938b6-encryption-config\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859677 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2850071-0204-4052-b9f3-863243d3300b-trusted-ca\") pod \"ingress-operator-5b745b69d9-gr2s6\" (UID: \"c2850071-0204-4052-b9f3-863243d3300b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859704 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94c146b4-f621-42ff-b0db-5e471b8938b6-serving-cert\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859719 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8832f47a-79fe-4045-91d9-d42f21a2652f-config\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859736 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d3b86c37-5764-4b23-b927-ad4a77885456-profile-collector-cert\") pod \"catalog-operator-68c6474976-r6qvl\" (UID: \"d3b86c37-5764-4b23-b927-ad4a77885456\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859751 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1baa0c14-c23b-401c-b20f-3789ff63a4c1-signing-key\") pod \"service-ca-9c57cc56f-qd9z7\" (UID: \"1baa0c14-c23b-401c-b20f-3789ff63a4c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-qd9z7" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859773 4693 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94c146b4-f621-42ff-b0db-5e471b8938b6-audit-dir\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859789 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hhs2z\" (UID: \"38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859805 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9675a84b-88dc-4a3c-8fe9-070088ada9b1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fcvw7\" (UID: \"9675a84b-88dc-4a3c-8fe9-070088ada9b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fcvw7" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859822 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fw9t\" (UniqueName: \"kubernetes.io/projected/f51bf74b-1d86-4a22-a355-f2c64a6516e5-kube-api-access-9fw9t\") pod \"console-operator-58897d9998-wcl2w\" (UID: \"f51bf74b-1d86-4a22-a355-f2c64a6516e5\") " pod="openshift-console-operator/console-operator-58897d9998-wcl2w" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859838 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f92dec3b-e25a-4f3f-a004-e85cc51093c5-config-volume\") pod \"collect-profiles-29425905-5fvqm\" (UID: \"f92dec3b-e25a-4f3f-a004-e85cc51093c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859855 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-serving-cert\") pod \"route-controller-manager-6576b87f9c-pzsn6\" (UID: \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859871 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-trusted-ca-bundle\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859887 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/901e060a-e400-40cd-bd50-d8bfb7c5127a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ks47b\" (UID: \"901e060a-e400-40cd-bd50-d8bfb7c5127a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859902 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/38f31f08-dc12-4feb-8567-ab19705f0e16-images\") pod \"machine-config-operator-74547568cd-pxqqx\" (UID: \"38f31f08-dc12-4feb-8567-ab19705f0e16\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859922 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvk7z\" (UniqueName: \"kubernetes.io/projected/d64fcdf8-100d-4628-beb2-126a10b8f71c-kube-api-access-cvk7z\") pod \"migrator-59844c95c7-spsbw\" (UID: \"d64fcdf8-100d-4628-beb2-126a10b8f71c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-spsbw" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859944 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859959 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt4x5\" (UniqueName: \"kubernetes.io/projected/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-kube-api-access-qt4x5\") pod \"route-controller-manager-6576b87f9c-pzsn6\" (UID: \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859973 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/49c82763-4d39-4424-8aa0-745158bd96c6-stats-auth\") pod \"router-default-5444994796-hfmz9\" (UID: \"49c82763-4d39-4424-8aa0-745158bd96c6\") " pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.859988 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1baa0c14-c23b-401c-b20f-3789ff63a4c1-signing-cabundle\") pod \"service-ca-9c57cc56f-qd9z7\" (UID: \"1baa0c14-c23b-401c-b20f-3789ff63a4c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-qd9z7" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860003 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19df2367-f186-4892-83e6-bee3c8177dc2-serving-cert\") pod \"service-ca-operator-777779d784-bwgbj\" (UID: \"19df2367-f186-4892-83e6-bee3c8177dc2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860021 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31b7f38e-5f91-43bf-bba4-bc8592747704-audit-dir\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860037 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44b1f98a-591c-49e4-9d2a-a0130f336528-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hr46r\" (UID: 
\"44b1f98a-591c-49e4-9d2a-a0130f336528\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860054 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp7lh\" (UniqueName: \"kubernetes.io/projected/1baa0c14-c23b-401c-b20f-3789ff63a4c1-kube-api-access-gp7lh\") pod \"service-ca-9c57cc56f-qd9z7\" (UID: \"1baa0c14-c23b-401c-b20f-3789ff63a4c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-qd9z7" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860069 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860085 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/62fa5de9-a571-40e5-a32c-e1708a428f19-available-featuregates\") pod \"openshift-config-operator-7777fb866f-w6x8t\" (UID: \"62fa5de9-a571-40e5-a32c-e1708a428f19\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860102 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/901e060a-e400-40cd-bd50-d8bfb7c5127a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ks47b\" (UID: \"901e060a-e400-40cd-bd50-d8bfb7c5127a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860117 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f51bf74b-1d86-4a22-a355-f2c64a6516e5-trusted-ca\") pod \"console-operator-58897d9998-wcl2w\" (UID: \"f51bf74b-1d86-4a22-a355-f2c64a6516e5\") " pod="openshift-console-operator/console-operator-58897d9998-wcl2w" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860134 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/31b7f38e-5f91-43bf-bba4-bc8592747704-etcd-serving-ca\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860148 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrlk\" (UniqueName: \"kubernetes.io/projected/87e8f397-20cd-469f-924d-204ce1a8db47-kube-api-access-sbrlk\") pod \"authentication-operator-69f744f599-wz942\" (UID: \"87e8f397-20cd-469f-924d-204ce1a8db47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860165 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-npwzs\" (UID: 
\"7b1c4746-f772-49d8-be11-9abc850ea7e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860183 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b56t9\" (UniqueName: \"kubernetes.io/projected/80dd1d93-b2bd-4fad-b199-aa072c2c8216-kube-api-access-b56t9\") pod \"package-server-manager-789f6589d5-pqr7h\" (UID: \"80dd1d93-b2bd-4fad-b199-aa072c2c8216\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860201 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6be45ad2-fe1c-4b29-8aa8-c5eec39978a3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-658pn\" (UID: \"6be45ad2-fe1c-4b29-8aa8-c5eec39978a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658pn" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860216 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-audit-policies\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860231 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94c146b4-f621-42ff-b0db-5e471b8938b6-etcd-client\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860248 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87e8f397-20cd-469f-924d-204ce1a8db47-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wz942\" (UID: \"87e8f397-20cd-469f-924d-204ce1a8db47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860264 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94c146b4-f621-42ff-b0db-5e471b8938b6-audit-policies\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860301 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2qhw\" (UniqueName: \"kubernetes.io/projected/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-kube-api-access-f2qhw\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860316 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f92dec3b-e25a-4f3f-a004-e85cc51093c5-secret-volume\") pod \"collect-profiles-29425905-5fvqm\" (UID: \"f92dec3b-e25a-4f3f-a004-e85cc51093c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm" 
Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860333 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl2tz\" (UniqueName: \"kubernetes.io/projected/f92dec3b-e25a-4f3f-a004-e85cc51093c5-kube-api-access-xl2tz\") pod \"collect-profiles-29425905-5fvqm\" (UID: \"f92dec3b-e25a-4f3f-a004-e85cc51093c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860347 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/45a4ae20-7daa-42b4-9801-c9613c7fd508-certs\") pod \"machine-config-server-qnfsm\" (UID: \"45a4ae20-7daa-42b4-9801-c9613c7fd508\") " pod="openshift-machine-config-operator/machine-config-server-qnfsm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860361 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51bf74b-1d86-4a22-a355-f2c64a6516e5-config\") pod \"console-operator-58897d9998-wcl2w\" (UID: \"f51bf74b-1d86-4a22-a355-f2c64a6516e5\") " pod="openshift-console-operator/console-operator-58897d9998-wcl2w" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860375 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f51bf74b-1d86-4a22-a355-f2c64a6516e5-serving-cert\") pod \"console-operator-58897d9998-wcl2w\" (UID: \"f51bf74b-1d86-4a22-a355-f2c64a6516e5\") " pod="openshift-console-operator/console-operator-58897d9998-wcl2w" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860400 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/050ec804-2082-4b39-8699-28d1c1992425-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nnqfp\" (UID: \"050ec804-2082-4b39-8699-28d1c1992425\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860421 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtc78\" (UniqueName: \"kubernetes.io/projected/b59eec2f-4046-439e-a4c3-1201ccdd8cd5-kube-api-access-wtc78\") pod \"multus-admission-controller-857f4d67dd-qp87x\" (UID: \"b59eec2f-4046-439e-a4c3-1201ccdd8cd5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qp87x" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860442 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnc9z\" (UniqueName: \"kubernetes.io/projected/19df2367-f186-4892-83e6-bee3c8177dc2-kube-api-access-gnc9z\") pod \"service-ca-operator-777779d784-bwgbj\" (UID: \"19df2367-f186-4892-83e6-bee3c8177dc2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860459 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/31b7f38e-5f91-43bf-bba4-bc8592747704-encryption-config\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860481 
4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2850071-0204-4052-b9f3-863243d3300b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gr2s6\" (UID: \"c2850071-0204-4052-b9f3-863243d3300b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860499 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87e8f397-20cd-469f-924d-204ce1a8db47-config\") pod \"authentication-operator-69f744f599-wz942\" (UID: \"87e8f397-20cd-469f-924d-204ce1a8db47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860515 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860530 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860546 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-client-ca\") pod \"route-controller-manager-6576b87f9c-pzsn6\" (UID: \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860561 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7037e0f8-e094-40a4-9188-7dc2fdd1b4a6-images\") pod \"machine-api-operator-5694c8668f-m7l28\" (UID: \"7037e0f8-e094-40a4-9188-7dc2fdd1b4a6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860577 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6lks\" (UniqueName: \"kubernetes.io/projected/44b1f98a-591c-49e4-9d2a-a0130f336528-kube-api-access-j6lks\") pod \"kube-storage-version-migrator-operator-b67b599dd-hr46r\" (UID: \"44b1f98a-591c-49e4-9d2a-a0130f336528\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860592 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg7zr\" (UniqueName: \"kubernetes.io/projected/38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95-kube-api-access-sg7zr\") pod \"olm-operator-6b444d44fb-hhs2z\" (UID: \"38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860608 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/49c82763-4d39-4424-8aa0-745158bd96c6-default-certificate\") pod \"router-default-5444994796-hfmz9\" (UID: \"49c82763-4d39-4424-8aa0-745158bd96c6\") " pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860622 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/31b7f38e-5f91-43bf-bba4-bc8592747704-etcd-client\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860638 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860654 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860669 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94c146b4-f621-42ff-b0db-5e471b8938b6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860697 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwl4p\" (UniqueName: \"kubernetes.io/projected/38f31f08-dc12-4feb-8567-ab19705f0e16-kube-api-access-nwl4p\") pod \"machine-config-operator-74547568cd-pxqqx\" (UID: \"38f31f08-dc12-4feb-8567-ab19705f0e16\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860713 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjc5p\" (UniqueName: \"kubernetes.io/projected/4c536f6e-dc3f-407b-81bc-ad0febbae611-kube-api-access-cjc5p\") pod \"openshift-apiserver-operator-796bbdcf4f-qdcck\" (UID: \"4c536f6e-dc3f-407b-81bc-ad0febbae611\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860733 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7037e0f8-e094-40a4-9188-7dc2fdd1b4a6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-m7l28\" (UID: \"7037e0f8-e094-40a4-9188-7dc2fdd1b4a6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860753 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9h78\" (UniqueName: 
\"kubernetes.io/projected/586d0874-ebe1-41db-b596-1dfed12b2b94-kube-api-access-c9h78\") pod \"downloads-7954f5f757-bz9v2\" (UID: \"586d0874-ebe1-41db-b596-1dfed12b2b94\") " pod="openshift-console/downloads-7954f5f757-bz9v2" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860767 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-service-ca\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860781 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f59b453-c693-4382-b7f5-82d3c8ee48e9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-29ww6\" (UID: \"4f59b453-c693-4382-b7f5-82d3c8ee48e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860798 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c2850071-0204-4052-b9f3-863243d3300b-metrics-tls\") pod \"ingress-operator-5b745b69d9-gr2s6\" (UID: \"c2850071-0204-4052-b9f3-863243d3300b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860812 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8832f47a-79fe-4045-91d9-d42f21a2652f-etcd-ca\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860828 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/61620225-2125-49da-94f6-f6ef9dd7e6ce-tmpfs\") pod \"packageserver-d55dfcdfc-d86dw\" (UID: \"61620225-2125-49da-94f6-f6ef9dd7e6ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860850 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860867 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-serving-cert\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860884 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwr5r\" (UniqueName: \"kubernetes.io/projected/901e060a-e400-40cd-bd50-d8bfb7c5127a-kube-api-access-nwr5r\") pod \"cluster-image-registry-operator-dc59b4c8b-ks47b\" (UID: \"901e060a-e400-40cd-bd50-d8bfb7c5127a\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860900 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-config\") pod \"route-controller-manager-6576b87f9c-pzsn6\" (UID: \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860914 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-88srp\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860930 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p82t9\" (UniqueName: \"kubernetes.io/projected/d3b86c37-5764-4b23-b927-ad4a77885456-kube-api-access-p82t9\") pod \"catalog-operator-68c6474976-r6qvl\" (UID: \"d3b86c37-5764-4b23-b927-ad4a77885456\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860948 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d3b86c37-5764-4b23-b927-ad4a77885456-srv-cert\") pod \"catalog-operator-68c6474976-r6qvl\" (UID: \"d3b86c37-5764-4b23-b927-ad4a77885456\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860965 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87e8f397-20cd-469f-924d-204ce1a8db47-service-ca-bundle\") pod \"authentication-operator-69f744f599-wz942\" (UID: \"87e8f397-20cd-469f-924d-204ce1a8db47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860982 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95-srv-cert\") pod \"olm-operator-6b444d44fb-hhs2z\" (UID: \"38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.860998 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b79gp\" (UniqueName: \"kubernetes.io/projected/62fa5de9-a571-40e5-a32c-e1708a428f19-kube-api-access-b79gp\") pod \"openshift-config-operator-7777fb866f-w6x8t\" (UID: \"62fa5de9-a571-40e5-a32c-e1708a428f19\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861016 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/38f31f08-dc12-4feb-8567-ab19705f0e16-proxy-tls\") pod \"machine-config-operator-74547568cd-pxqqx\" (UID: \"38f31f08-dc12-4feb-8567-ab19705f0e16\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx" Dec 12 15:48:44 crc 
kubenswrapper[4693]: I1212 15:48:44.861036 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8832f47a-79fe-4045-91d9-d42f21a2652f-serving-cert\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861051 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8832f47a-79fe-4045-91d9-d42f21a2652f-etcd-service-ca\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861067 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkqhx\" (UniqueName: \"kubernetes.io/projected/8832f47a-79fe-4045-91d9-d42f21a2652f-kube-api-access-jkqhx\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861082 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b59eec2f-4046-439e-a4c3-1201ccdd8cd5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qp87x\" (UID: \"b59eec2f-4046-439e-a4c3-1201ccdd8cd5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qp87x" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861102 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf-config\") pod \"kube-apiserver-operator-766d6c64bb-k47hd\" (UID: \"1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861119 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62fa5de9-a571-40e5-a32c-e1708a428f19-serving-cert\") pod \"openshift-config-operator-7777fb866f-w6x8t\" (UID: \"62fa5de9-a571-40e5-a32c-e1708a428f19\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861136 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxfmh\" (UniqueName: \"kubernetes.io/projected/c2850071-0204-4052-b9f3-863243d3300b-kube-api-access-nxfmh\") pod \"ingress-operator-5b745b69d9-gr2s6\" (UID: \"c2850071-0204-4052-b9f3-863243d3300b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861153 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61620225-2125-49da-94f6-f6ef9dd7e6ce-webhook-cert\") pod \"packageserver-d55dfcdfc-d86dw\" (UID: \"61620225-2125-49da-94f6-f6ef9dd7e6ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861170 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-config\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861185 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31b7f38e-5f91-43bf-bba4-bc8592747704-config\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861201 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlvbg\" (UniqueName: \"kubernetes.io/projected/94c146b4-f621-42ff-b0db-5e471b8938b6-kube-api-access-vlvbg\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861217 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb49v\" (UniqueName: \"kubernetes.io/projected/050ec804-2082-4b39-8699-28d1c1992425-kube-api-access-pb49v\") pod \"openshift-controller-manager-operator-756b6f6bc6-nnqfp\" (UID: \"050ec804-2082-4b39-8699-28d1c1992425\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861234 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2c7m\" (UniqueName: \"kubernetes.io/projected/45a4ae20-7daa-42b4-9801-c9613c7fd508-kube-api-access-d2c7m\") pod \"machine-config-server-qnfsm\" (UID: \"45a4ae20-7daa-42b4-9801-c9613c7fd508\") " pod="openshift-machine-config-operator/machine-config-server-qnfsm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861249 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19df2367-f186-4892-83e6-bee3c8177dc2-config\") pod \"service-ca-operator-777779d784-bwgbj\" (UID: \"19df2367-f186-4892-83e6-bee3c8177dc2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861282 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-oauth-config\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861303 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f743d3ca-28a7-4e25-955f-1385b9ef8c05-audit-dir\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861319 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/45a4ae20-7daa-42b4-9801-c9613c7fd508-node-bootstrap-token\") pod \"machine-config-server-qnfsm\" (UID: \"45a4ae20-7daa-42b4-9801-c9613c7fd508\") " 
pod="openshift-machine-config-operator/machine-config-server-qnfsm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.861980 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/31b7f38e-5f91-43bf-bba4-bc8592747704-audit\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.862491 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/050ec804-2082-4b39-8699-28d1c1992425-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nnqfp\" (UID: \"050ec804-2082-4b39-8699-28d1c1992425\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.863187 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f59b453-c693-4382-b7f5-82d3c8ee48e9-config\") pod \"kube-controller-manager-operator-78b949d7b-29ww6\" (UID: \"4f59b453-c693-4382-b7f5-82d3c8ee48e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6" Dec 12 15:48:44 crc kubenswrapper[4693]: E1212 15:48:44.863463 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:45.363451639 +0000 UTC m=+152.532091240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.866470 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.868790 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31b7f38e-5f91-43bf-bba4-bc8592747704-serving-cert\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.871185 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87e8f397-20cd-469f-924d-204ce1a8db47-serving-cert\") pod \"authentication-operator-69f744f599-wz942\" (UID: \"87e8f397-20cd-469f-924d-204ce1a8db47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.872839 4693 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.873664 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-oauth-serving-cert\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.873679 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8832f47a-79fe-4045-91d9-d42f21a2652f-config\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.874156 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87e8f397-20cd-469f-924d-204ce1a8db47-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wz942\" (UID: \"87e8f397-20cd-469f-924d-204ce1a8db47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.874203 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94c146b4-f621-42ff-b0db-5e471b8938b6-audit-policies\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.874231 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94c146b4-f621-42ff-b0db-5e471b8938b6-audit-dir\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.874733 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-config\") pod \"controller-manager-879f6c89f-88srp\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.875844 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.878142 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7037e0f8-e094-40a4-9188-7dc2fdd1b4a6-config\") pod \"machine-api-operator-5694c8668f-m7l28\" (UID: \"7037e0f8-e094-40a4-9188-7dc2fdd1b4a6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.878928 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/050ec804-2082-4b39-8699-28d1c1992425-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-nnqfp\" (UID: \"050ec804-2082-4b39-8699-28d1c1992425\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.878995 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-serving-cert\") pod \"route-controller-manager-6576b87f9c-pzsn6\" (UID: \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.879107 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-config\") pod \"route-controller-manager-6576b87f9c-pzsn6\" (UID: \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.879614 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38f31f08-dc12-4feb-8567-ab19705f0e16-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pxqqx\" (UID: \"38f31f08-dc12-4feb-8567-ab19705f0e16\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.879650 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94c146b4-f621-42ff-b0db-5e471b8938b6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.879872 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.879993 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-config\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.880088 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/901e060a-e400-40cd-bd50-d8bfb7c5127a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ks47b\" (UID: \"901e060a-e400-40cd-bd50-d8bfb7c5127a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.880362 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-serving-cert\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.880488 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/31b7f38e-5f91-43bf-bba4-bc8592747704-node-pullsecrets\") pod 
\"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.880583 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31b7f38e-5f91-43bf-bba4-bc8592747704-config\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.880588 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-trusted-ca-bundle\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.881105 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-service-ca\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.881451 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/62fa5de9-a571-40e5-a32c-e1708a428f19-available-featuregates\") pod \"openshift-config-operator-7777fb866f-w6x8t\" (UID: \"62fa5de9-a571-40e5-a32c-e1708a428f19\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.881639 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f743d3ca-28a7-4e25-955f-1385b9ef8c05-audit-dir\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.881894 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/31b7f38e-5f91-43bf-bba4-bc8592747704-image-import-ca\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.881965 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/38f31f08-dc12-4feb-8567-ab19705f0e16-images\") pod \"machine-config-operator-74547568cd-pxqqx\" (UID: \"38f31f08-dc12-4feb-8567-ab19705f0e16\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.882086 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31b7f38e-5f91-43bf-bba4-bc8592747704-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.882134 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8832f47a-79fe-4045-91d9-d42f21a2652f-etcd-client\") pod 
\"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.882564 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94c146b4-f621-42ff-b0db-5e471b8938b6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.882599 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-88srp\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.882712 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8832f47a-79fe-4045-91d9-d42f21a2652f-etcd-ca\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.882773 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87e8f397-20cd-469f-924d-204ce1a8db47-config\") pod \"authentication-operator-69f744f599-wz942\" (UID: \"87e8f397-20cd-469f-924d-204ce1a8db47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.882793 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.883027 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87e8f397-20cd-469f-924d-204ce1a8db47-service-ca-bundle\") pod \"authentication-operator-69f744f599-wz942\" (UID: \"87e8f397-20cd-469f-924d-204ce1a8db47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.883438 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/31b7f38e-5f91-43bf-bba4-bc8592747704-etcd-serving-ca\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.883486 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.883556 4693 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-client-ca\") pod \"controller-manager-879f6c89f-88srp\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.883667 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31b7f38e-5f91-43bf-bba4-bc8592747704-audit-dir\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.883767 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8832f47a-79fe-4045-91d9-d42f21a2652f-etcd-service-ca\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.884322 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-serving-cert\") pod \"controller-manager-879f6c89f-88srp\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.884401 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.884535 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-audit-policies\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.884652 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-oauth-config\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.884720 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f59b453-c693-4382-b7f5-82d3c8ee48e9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-29ww6\" (UID: \"4f59b453-c693-4382-b7f5-82d3c8ee48e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.885020 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-client-ca\") pod \"route-controller-manager-6576b87f9c-pzsn6\" (UID: \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.885641 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.888681 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/38f31f08-dc12-4feb-8567-ab19705f0e16-proxy-tls\") pod \"machine-config-operator-74547568cd-pxqqx\" (UID: \"38f31f08-dc12-4feb-8567-ab19705f0e16\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.888725 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/31b7f38e-5f91-43bf-bba4-bc8592747704-etcd-client\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.888767 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.888799 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.888807 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.888936 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94c146b4-f621-42ff-b0db-5e471b8938b6-encryption-config\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.889072 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7037e0f8-e094-40a4-9188-7dc2fdd1b4a6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-m7l28\" (UID: \"7037e0f8-e094-40a4-9188-7dc2fdd1b4a6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.889185 4693 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94c146b4-f621-42ff-b0db-5e471b8938b6-serving-cert\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.889214 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94c146b4-f621-42ff-b0db-5e471b8938b6-etcd-client\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.890717 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/31b7f38e-5f91-43bf-bba4-bc8592747704-encryption-config\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.890864 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8832f47a-79fe-4045-91d9-d42f21a2652f-serving-cert\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.890884 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62fa5de9-a571-40e5-a32c-e1708a428f19-serving-cert\") pod \"openshift-config-operator-7777fb866f-w6x8t\" (UID: \"62fa5de9-a571-40e5-a32c-e1708a428f19\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.891554 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.892052 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.897481 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.905397 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7037e0f8-e094-40a4-9188-7dc2fdd1b4a6-images\") pod \"machine-api-operator-5694c8668f-m7l28\" (UID: \"7037e0f8-e094-40a4-9188-7dc2fdd1b4a6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.917755 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 
15:48:44.937229 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.958068 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.962457 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:44 crc kubenswrapper[4693]: E1212 15:48:44.962529 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:45.462508453 +0000 UTC m=+152.631148054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.962669 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fw9t\" (UniqueName: \"kubernetes.io/projected/f51bf74b-1d86-4a22-a355-f2c64a6516e5-kube-api-access-9fw9t\") pod \"console-operator-58897d9998-wcl2w\" (UID: \"f51bf74b-1d86-4a22-a355-f2c64a6516e5\") " pod="openshift-console-operator/console-operator-58897d9998-wcl2w" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.962695 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f92dec3b-e25a-4f3f-a004-e85cc51093c5-config-volume\") pod \"collect-profiles-29425905-5fvqm\" (UID: \"f92dec3b-e25a-4f3f-a004-e85cc51093c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.962743 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1baa0c14-c23b-401c-b20f-3789ff63a4c1-signing-cabundle\") pod \"service-ca-9c57cc56f-qd9z7\" (UID: \"1baa0c14-c23b-401c-b20f-3789ff63a4c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-qd9z7" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.962759 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19df2367-f186-4892-83e6-bee3c8177dc2-serving-cert\") pod \"service-ca-operator-777779d784-bwgbj\" (UID: \"19df2367-f186-4892-83e6-bee3c8177dc2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.962781 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp7lh\" (UniqueName: \"kubernetes.io/projected/1baa0c14-c23b-401c-b20f-3789ff63a4c1-kube-api-access-gp7lh\") pod 
\"service-ca-9c57cc56f-qd9z7\" (UID: \"1baa0c14-c23b-401c-b20f-3789ff63a4c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-qd9z7" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.962830 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f51bf74b-1d86-4a22-a355-f2c64a6516e5-trusted-ca\") pod \"console-operator-58897d9998-wcl2w\" (UID: \"f51bf74b-1d86-4a22-a355-f2c64a6516e5\") " pod="openshift-console-operator/console-operator-58897d9998-wcl2w" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.962860 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-npwzs\" (UID: \"7b1c4746-f772-49d8-be11-9abc850ea7e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.962931 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b56t9\" (UniqueName: \"kubernetes.io/projected/80dd1d93-b2bd-4fad-b199-aa072c2c8216-kube-api-access-b56t9\") pod \"package-server-manager-789f6589d5-pqr7h\" (UID: \"80dd1d93-b2bd-4fad-b199-aa072c2c8216\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.962983 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx7g4\" (UniqueName: \"kubernetes.io/projected/20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157-kube-api-access-lx7g4\") pod \"ingress-canary-42cgp\" (UID: \"20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157\") " pod="openshift-ingress-canary/ingress-canary-42cgp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.963011 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ef8bea0e-6f25-4c4d-a294-f246fbff9926-registration-dir\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.963059 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f92dec3b-e25a-4f3f-a004-e85cc51093c5-secret-volume\") pod \"collect-profiles-29425905-5fvqm\" (UID: \"f92dec3b-e25a-4f3f-a004-e85cc51093c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.963164 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl2tz\" (UniqueName: \"kubernetes.io/projected/f92dec3b-e25a-4f3f-a004-e85cc51093c5-kube-api-access-xl2tz\") pod \"collect-profiles-29425905-5fvqm\" (UID: \"f92dec3b-e25a-4f3f-a004-e85cc51093c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.963186 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/45a4ae20-7daa-42b4-9801-c9613c7fd508-certs\") pod \"machine-config-server-qnfsm\" (UID: \"45a4ae20-7daa-42b4-9801-c9613c7fd508\") " pod="openshift-machine-config-operator/machine-config-server-qnfsm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 
15:48:44.963384 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51bf74b-1d86-4a22-a355-f2c64a6516e5-config\") pod \"console-operator-58897d9998-wcl2w\" (UID: \"f51bf74b-1d86-4a22-a355-f2c64a6516e5\") " pod="openshift-console-operator/console-operator-58897d9998-wcl2w" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.963474 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f51bf74b-1d86-4a22-a355-f2c64a6516e5-serving-cert\") pod \"console-operator-58897d9998-wcl2w\" (UID: \"f51bf74b-1d86-4a22-a355-f2c64a6516e5\") " pod="openshift-console-operator/console-operator-58897d9998-wcl2w" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.963498 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtc78\" (UniqueName: \"kubernetes.io/projected/b59eec2f-4046-439e-a4c3-1201ccdd8cd5-kube-api-access-wtc78\") pod \"multus-admission-controller-857f4d67dd-qp87x\" (UID: \"b59eec2f-4046-439e-a4c3-1201ccdd8cd5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qp87x" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.963620 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnc9z\" (UniqueName: \"kubernetes.io/projected/19df2367-f186-4892-83e6-bee3c8177dc2-kube-api-access-gnc9z\") pod \"service-ca-operator-777779d784-bwgbj\" (UID: \"19df2367-f186-4892-83e6-bee3c8177dc2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.963646 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ef8bea0e-6f25-4c4d-a294-f246fbff9926-socket-dir\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.963692 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2872c20a-d73b-43e0-a4c4-dc6238f5d60b-metrics-tls\") pod \"dns-default-wd8bp\" (UID: \"2872c20a-d73b-43e0-a4c4-dc6238f5d60b\") " pod="openshift-dns/dns-default-wd8bp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.963903 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ef8bea0e-6f25-4c4d-a294-f246fbff9926-csi-data-dir\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.964014 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ef8bea0e-6f25-4c4d-a294-f246fbff9926-mountpoint-dir\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.964218 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/61620225-2125-49da-94f6-f6ef9dd7e6ce-tmpfs\") pod \"packageserver-d55dfcdfc-d86dw\" (UID: 
\"61620225-2125-49da-94f6-f6ef9dd7e6ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.964620 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p82t9\" (UniqueName: \"kubernetes.io/projected/d3b86c37-5764-4b23-b927-ad4a77885456-kube-api-access-p82t9\") pod \"catalog-operator-68c6474976-r6qvl\" (UID: \"d3b86c37-5764-4b23-b927-ad4a77885456\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.964644 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157-cert\") pod \"ingress-canary-42cgp\" (UID: \"20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157\") " pod="openshift-ingress-canary/ingress-canary-42cgp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.964808 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/61620225-2125-49da-94f6-f6ef9dd7e6ce-tmpfs\") pod \"packageserver-d55dfcdfc-d86dw\" (UID: \"61620225-2125-49da-94f6-f6ef9dd7e6ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.964871 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d3b86c37-5764-4b23-b927-ad4a77885456-srv-cert\") pod \"catalog-operator-68c6474976-r6qvl\" (UID: \"d3b86c37-5764-4b23-b927-ad4a77885456\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.964953 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5z4x\" (UniqueName: \"kubernetes.io/projected/ef8bea0e-6f25-4c4d-a294-f246fbff9926-kube-api-access-g5z4x\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.965065 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b59eec2f-4046-439e-a4c3-1201ccdd8cd5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qp87x\" (UID: \"b59eec2f-4046-439e-a4c3-1201ccdd8cd5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qp87x" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.965101 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ef8bea0e-6f25-4c4d-a294-f246fbff9926-plugins-dir\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.965124 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61620225-2125-49da-94f6-f6ef9dd7e6ce-webhook-cert\") pod \"packageserver-d55dfcdfc-d86dw\" (UID: \"61620225-2125-49da-94f6-f6ef9dd7e6ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.965146 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-d2c7m\" (UniqueName: \"kubernetes.io/projected/45a4ae20-7daa-42b4-9801-c9613c7fd508-kube-api-access-d2c7m\") pod \"machine-config-server-qnfsm\" (UID: \"45a4ae20-7daa-42b4-9801-c9613c7fd508\") " pod="openshift-machine-config-operator/machine-config-server-qnfsm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.965164 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19df2367-f186-4892-83e6-bee3c8177dc2-config\") pod \"service-ca-operator-777779d784-bwgbj\" (UID: \"19df2367-f186-4892-83e6-bee3c8177dc2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.965191 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/45a4ae20-7daa-42b4-9801-c9613c7fd508-node-bootstrap-token\") pod \"machine-config-server-qnfsm\" (UID: \"45a4ae20-7daa-42b4-9801-c9613c7fd508\") " pod="openshift-machine-config-operator/machine-config-server-qnfsm" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.965213 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2872c20a-d73b-43e0-a4c4-dc6238f5d60b-config-volume\") pod \"dns-default-wd8bp\" (UID: \"2872c20a-d73b-43e0-a4c4-dc6238f5d60b\") " pod="openshift-dns/dns-default-wd8bp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.965257 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.965307 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/80dd1d93-b2bd-4fad-b199-aa072c2c8216-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pqr7h\" (UID: \"80dd1d93-b2bd-4fad-b199-aa072c2c8216\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.965353 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrg2q\" (UniqueName: \"kubernetes.io/projected/61620225-2125-49da-94f6-f6ef9dd7e6ce-kube-api-access-lrg2q\") pod \"packageserver-d55dfcdfc-d86dw\" (UID: \"61620225-2125-49da-94f6-f6ef9dd7e6ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.965369 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjwsj\" (UniqueName: \"kubernetes.io/projected/7b1c4746-f772-49d8-be11-9abc850ea7e2-kube-api-access-kjwsj\") pod \"marketplace-operator-79b997595-npwzs\" (UID: \"7b1c4746-f772-49d8-be11-9abc850ea7e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.965391 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw9n5\" (UniqueName: 
\"kubernetes.io/projected/2872c20a-d73b-43e0-a4c4-dc6238f5d60b-kube-api-access-zw9n5\") pod \"dns-default-wd8bp\" (UID: \"2872c20a-d73b-43e0-a4c4-dc6238f5d60b\") " pod="openshift-dns/dns-default-wd8bp" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.965453 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61620225-2125-49da-94f6-f6ef9dd7e6ce-apiservice-cert\") pod \"packageserver-d55dfcdfc-d86dw\" (UID: \"61620225-2125-49da-94f6-f6ef9dd7e6ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.965474 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-npwzs\" (UID: \"7b1c4746-f772-49d8-be11-9abc850ea7e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.965530 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d3b86c37-5764-4b23-b927-ad4a77885456-profile-collector-cert\") pod \"catalog-operator-68c6474976-r6qvl\" (UID: \"d3b86c37-5764-4b23-b927-ad4a77885456\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.965547 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1baa0c14-c23b-401c-b20f-3789ff63a4c1-signing-key\") pod \"service-ca-9c57cc56f-qd9z7\" (UID: \"1baa0c14-c23b-401c-b20f-3789ff63a4c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-qd9z7" Dec 12 15:48:44 crc kubenswrapper[4693]: E1212 15:48:44.965676 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:45.465664364 +0000 UTC m=+152.634303955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.977601 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.985210 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c536f6e-dc3f-407b-81bc-ad0febbae611-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qdcck\" (UID: \"4c536f6e-dc3f-407b-81bc-ad0febbae611\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck" Dec 12 15:48:44 crc kubenswrapper[4693]: I1212 15:48:44.997164 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.003220 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c536f6e-dc3f-407b-81bc-ad0febbae611-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qdcck\" (UID: \"4c536f6e-dc3f-407b-81bc-ad0febbae611\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.018104 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.038690 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.066396 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.066609 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:45.566592406 +0000 UTC m=+152.735232007 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.066741 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2872c20a-d73b-43e0-a4c4-dc6238f5d60b-config-volume\") pod \"dns-default-wd8bp\" (UID: \"2872c20a-d73b-43e0-a4c4-dc6238f5d60b\") " pod="openshift-dns/dns-default-wd8bp" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.066839 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.067007 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw9n5\" (UniqueName: \"kubernetes.io/projected/2872c20a-d73b-43e0-a4c4-dc6238f5d60b-kube-api-access-zw9n5\") pod \"dns-default-wd8bp\" (UID: \"2872c20a-d73b-43e0-a4c4-dc6238f5d60b\") " pod="openshift-dns/dns-default-wd8bp" Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.067209 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:45.567197101 +0000 UTC m=+152.735836702 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.067999 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx7g4\" (UniqueName: \"kubernetes.io/projected/20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157-kube-api-access-lx7g4\") pod \"ingress-canary-42cgp\" (UID: \"20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157\") " pod="openshift-ingress-canary/ingress-canary-42cgp" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.068043 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ef8bea0e-6f25-4c4d-a294-f246fbff9926-registration-dir\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.068138 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ef8bea0e-6f25-4c4d-a294-f246fbff9926-socket-dir\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.068171 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2872c20a-d73b-43e0-a4c4-dc6238f5d60b-metrics-tls\") pod \"dns-default-wd8bp\" (UID: \"2872c20a-d73b-43e0-a4c4-dc6238f5d60b\") " pod="openshift-dns/dns-default-wd8bp" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.068213 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ef8bea0e-6f25-4c4d-a294-f246fbff9926-csi-data-dir\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.068316 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ef8bea0e-6f25-4c4d-a294-f246fbff9926-mountpoint-dir\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.068364 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157-cert\") pod \"ingress-canary-42cgp\" (UID: \"20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157\") " pod="openshift-ingress-canary/ingress-canary-42cgp" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.068391 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5z4x\" (UniqueName: \"kubernetes.io/projected/ef8bea0e-6f25-4c4d-a294-f246fbff9926-kube-api-access-g5z4x\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" Dec 
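
[Annotation] Every MountDevice/TearDown failure for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 above has the same root cause: kubelet only learns about a CSI driver when the driver pod registers itself over the plugin-registration socket, and the driver pod here (csi-hostpathplugin-wwxcz in the hostpath-provisioner namespace) is itself still having its registration-dir and socket-dir volumes mounted a few entries up. Until that registration happens, the lookup behind newCsiDriverClient keeps failing and the operation is requeued (durationBeforeRetry 500ms). A minimal Go sketch of such a lookup follows, assuming a simple in-memory name-to-endpoint registry rather than kubelet's real csi_plugin.go structures:

// csi_registry_sketch.go - illustrative registry lookup; not kubelet code.
package main

import (
	"fmt"
	"sync"
)

type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> socket endpoint (illustrative)
}

// newCsiDriverClient fails exactly like the log when the driver has not
// yet registered over the plugin registration socket.
func (r *driverRegistry) newCsiDriverClient(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]string{}} // nothing registered yet
	if _, err := reg.newCsiDriverClient("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println(err) // matches the error in the log; kubelet retries with backoff
	}
}
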
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.068448 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ef8bea0e-6f25-4c4d-a294-f246fbff9926-plugins-dir\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.068487 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ef8bea0e-6f25-4c4d-a294-f246fbff9926-socket-dir\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.068484 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ef8bea0e-6f25-4c4d-a294-f246fbff9926-csi-data-dir\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.068490 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ef8bea0e-6f25-4c4d-a294-f246fbff9926-mountpoint-dir\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.068553 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ef8bea0e-6f25-4c4d-a294-f246fbff9926-plugins-dir\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.068882 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ef8bea0e-6f25-4c4d-a294-f246fbff9926-registration-dir\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.077816 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.098071 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.106415 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c2850071-0204-4052-b9f3-863243d3300b-metrics-tls\") pod \"ingress-operator-5b745b69d9-gr2s6\" (UID: \"c2850071-0204-4052-b9f3-863243d3300b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.126232 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.135051 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2850071-0204-4052-b9f3-863243d3300b-trusted-ca\") pod \"ingress-operator-5b745b69d9-gr2s6\" (UID: \"c2850071-0204-4052-b9f3-863243d3300b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.138807 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.157799 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.169992 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.170146 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:45.670115804 +0000 UTC m=+152.838755445 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.170544 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.171005 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:45.670989127 +0000 UTC m=+152.839628758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.177287 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.198538 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.208816 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/49c82763-4d39-4424-8aa0-745158bd96c6-default-certificate\") pod \"router-default-5444994796-hfmz9\" (UID: \"49c82763-4d39-4424-8aa0-745158bd96c6\") " pod="openshift-ingress/router-default-5444994796-hfmz9"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.217296 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.226249 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/49c82763-4d39-4424-8aa0-745158bd96c6-stats-auth\") pod \"router-default-5444994796-hfmz9\" (UID: \"49c82763-4d39-4424-8aa0-745158bd96c6\") " pod="openshift-ingress/router-default-5444994796-hfmz9"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.238147 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.244420 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49c82763-4d39-4424-8aa0-745158bd96c6-metrics-certs\") pod \"router-default-5444994796-hfmz9\" (UID: \"49c82763-4d39-4424-8aa0-745158bd96c6\") " pod="openshift-ingress/router-default-5444994796-hfmz9"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.258151 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.265037 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c82763-4d39-4424-8aa0-745158bd96c6-service-ca-bundle\") pod \"router-default-5444994796-hfmz9\" (UID: \"49c82763-4d39-4424-8aa0-745158bd96c6\") " pod="openshift-ingress/router-default-5444994796-hfmz9"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.272728 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.272913 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:45.772888594 +0000 UTC m=+152.941528195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.273038 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.273989 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:45.773943491 +0000 UTC m=+152.942583132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.277522 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.298041 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.318318 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.338749 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.349724 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9675a84b-88dc-4a3c-8fe9-070088ada9b1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fcvw7\" (UID: \"9675a84b-88dc-4a3c-8fe9-070088ada9b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fcvw7"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.357680 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.374298 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.374487 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:45.874430431 +0000 UTC m=+153.043070052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.374656 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.375530 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:45.875518919 +0000 UTC m=+153.044158520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.377577 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.398352 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.418092 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.432420 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k47hd\" (UID: \"1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.438173 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.444987 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf-config\") pod \"kube-apiserver-operator-766d6c64bb-k47hd\" (UID: \"1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.457973 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.475980 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.476244 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:45.976216545 +0000 UTC m=+153.144856146 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.477746 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.477966 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.478238 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:45.978215377 +0000 UTC m=+153.146855008 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.485416 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44b1f98a-591c-49e4-9d2a-a0130f336528-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hr46r\" (UID: \"44b1f98a-591c-49e4-9d2a-a0130f336528\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.498405 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.519252 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.527659 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44b1f98a-591c-49e4-9d2a-a0130f336528-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hr46r\" (UID: \"44b1f98a-591c-49e4-9d2a-a0130f336528\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.537584 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.559509 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.578504 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.580548 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.581063 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.080922634 +0000 UTC m=+153.249562275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.581320 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.582540 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.082512615 +0000 UTC m=+153.251152276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.587065 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6be45ad2-fe1c-4b29-8aa8-c5eec39978a3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-658pn\" (UID: \"6be45ad2-fe1c-4b29-8aa8-c5eec39978a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658pn"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.598865 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.618986 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.637942 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.658243 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.679169 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.683565 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.684582 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.184560826 +0000 UTC m=+153.353200437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.688593 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hhs2z\" (UID: \"38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.689323 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f92dec3b-e25a-4f3f-a004-e85cc51093c5-secret-volume\") pod \"collect-profiles-29425905-5fvqm\" (UID: \"f92dec3b-e25a-4f3f-a004-e85cc51093c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.689987 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d3b86c37-5764-4b23-b927-ad4a77885456-profile-collector-cert\") pod \"catalog-operator-68c6474976-r6qvl\" (UID: \"d3b86c37-5764-4b23-b927-ad4a77885456\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.696512 4693 request.go:700] Waited for 1.012329539s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.698616 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.707559 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95-srv-cert\") pod \"olm-operator-6b444d44fb-hhs2z\" (UID: \"38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.719003 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.738485 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.757508 4693 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: failed to sync configmap cache: timed out waiting for the condition
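
[Annotation] The request.go:700 entry above is client-go's client-side rate limiter at work, distinct from server-side priority and fairness: every API request first takes a token from a token bucket, and the client logs a warning when it had to wait noticeably long (here just over a second for a secrets GET). The sketch below reproduces the mechanism with golang.org/x/time/rate; the 5 QPS / burst 10 figures are commonly cited client-go defaults and are an assumption here, not something read from this node's configuration, and the print threshold is lowered so a short run shows output.

// throttle_sketch.go - token-bucket throttling in the style of client-go's
// rate limiter; illustrative, not the kubelet's actual client code.
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10) // assumed: 5 requests/s, burst 10
	ctx := context.Background()
	for i := 0; i < 15; i++ {
		start := time.Now()
		if err := limiter.Wait(ctx); err != nil { // blocks once the burst is spent
			panic(err)
		}
		if wait := time.Since(start); wait > 100*time.Millisecond {
			fmt.Printf("request %d waited %s due to client-side throttling\n", i, wait)
		}
	}
}
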
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.757615 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-trusted-ca podName:f28a792f-4814-4a24-ab79-3a5b00adb25e nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.257585874 +0000 UTC m=+153.426225505 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-trusted-ca") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.757687 4693 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.757867 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-auth-proxy-config podName:19166153-66f0-4f4f-8f4b-ef7af5a72770 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.257839361 +0000 UTC m=+153.426479092 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-auth-proxy-config") pod "machine-approver-56656f9798-x6zxm" (UID: "19166153-66f0-4f4f-8f4b-ef7af5a72770") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.758025 4693 secret.go:188] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.758090 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19166153-66f0-4f4f-8f4b-ef7af5a72770-machine-approver-tls podName:19166153-66f0-4f4f-8f4b-ef7af5a72770 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.258063766 +0000 UTC m=+153.426703367 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/19166153-66f0-4f4f-8f4b-ef7af5a72770-machine-approver-tls") pod "machine-approver-56656f9798-x6zxm" (UID: "19166153-66f0-4f4f-8f4b-ef7af5a72770") : failed to sync secret cache: timed out waiting for the condition
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.758253 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.760851 4693 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.760927 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-config podName:19166153-66f0-4f4f-8f4b-ef7af5a72770 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.260916259 +0000 UTC m=+153.429555860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-config") pod "machine-approver-56656f9798-x6zxm" (UID: "19166153-66f0-4f4f-8f4b-ef7af5a72770") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.777947 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.786084 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.787214 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.287188661 +0000 UTC m=+153.455828262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.798213 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.808859 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b59eec2f-4046-439e-a4c3-1201ccdd8cd5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qp87x\" (UID: \"b59eec2f-4046-439e-a4c3-1201ccdd8cd5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qp87x"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.818613 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.824827 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51bf74b-1d86-4a22-a355-f2c64a6516e5-config\") pod \"console-operator-58897d9998-wcl2w\" (UID: \"f51bf74b-1d86-4a22-a355-f2c64a6516e5\") " pod="openshift-console-operator/console-operator-58897d9998-wcl2w"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.838164 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.849687 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f51bf74b-1d86-4a22-a355-f2c64a6516e5-serving-cert\") pod \"console-operator-58897d9998-wcl2w\" (UID: \"f51bf74b-1d86-4a22-a355-f2c64a6516e5\") " pod="openshift-console-operator/console-operator-58897d9998-wcl2w"
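
[Annotation] The "failed to sync configmap cache: timed out waiting for the condition" failures above are mount-time waits, not API errors: before materializing a configMap or secret volume, kubelet waits for its watch-based cache of that object to sync, gives up after a timeout, and lets nestedpendingoperations reschedule the whole MountVolume.SetUp after durationBeforeRetry. Below is a self-contained Go sketch of that wait-or-time-out shape, with the error string matching the log; the function name and the timings are illustrative, not kubelet's.

// cache_sync_sketch.go - poll a condition until it holds or a deadline
// passes, surfacing the same "timed out waiting for the condition" error
// seen in the configmap.go/secret.go entries above.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errWaitTimeout = errors.New("timed out waiting for the condition")

// waitForCacheSync polls synced() every interval until it returns true or
// the timeout elapses.
func waitForCacheSync(synced func() bool, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if synced() {
			return nil
		}
		if time.Now().After(deadline) {
			return errWaitTimeout
		}
		time.Sleep(interval)
	}
}

func main() {
	// A cache that never syncs, to reproduce the failure mode in the log.
	err := waitForCacheSync(func() bool { return false },
		10*time.Millisecond, 50*time.Millisecond)
	if err != nil {
		fmt.Printf("Couldn't get configMap openshift-image-registry/trusted-ca: failed to sync configmap cache: %v\n", err)
	}
}
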
15:48:45.857774 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.880734 4693 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.880825 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/901e060a-e400-40cd-bd50-d8bfb7c5127a-trusted-ca podName:901e060a-e400-40cd-bd50-d8bfb7c5127a nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.380804097 +0000 UTC m=+153.549443718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/901e060a-e400-40cd-bd50-d8bfb7c5127a-trusted-ca") pod "cluster-image-registry-operator-dc59b4c8b-ks47b" (UID: "901e060a-e400-40cd-bd50-d8bfb7c5127a") : failed to sync configmap cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.883940 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.888140 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.888304 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.388248787 +0000 UTC m=+153.556888428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.889028 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.889432 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.389412537 +0000 UTC m=+153.558052138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.894887 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f51bf74b-1d86-4a22-a355-f2c64a6516e5-trusted-ca\") pod \"console-operator-58897d9998-wcl2w\" (UID: \"f51bf74b-1d86-4a22-a355-f2c64a6516e5\") " pod="openshift-console-operator/console-operator-58897d9998-wcl2w" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.899425 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.918489 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.939038 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.950497 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d3b86c37-5764-4b23-b927-ad4a77885456-srv-cert\") pod \"catalog-operator-68c6474976-r6qvl\" (UID: \"d3b86c37-5764-4b23-b927-ad4a77885456\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.958154 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.963083 4693 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.963147 4693 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.963149 4693 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.963200 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f92dec3b-e25a-4f3f-a004-e85cc51093c5-config-volume podName:f92dec3b-e25a-4f3f-a004-e85cc51093c5 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.463183264 +0000 UTC m=+153.631822865 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f92dec3b-e25a-4f3f-a004-e85cc51093c5-config-volume") pod "collect-profiles-29425905-5fvqm" (UID: "f92dec3b-e25a-4f3f-a004-e85cc51093c5") : failed to sync configmap cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.963150 4693 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.963216 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1baa0c14-c23b-401c-b20f-3789ff63a4c1-signing-cabundle podName:1baa0c14-c23b-401c-b20f-3789ff63a4c1 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.463209495 +0000 UTC m=+153.631849096 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/1baa0c14-c23b-401c-b20f-3789ff63a4c1-signing-cabundle") pod "service-ca-9c57cc56f-qd9z7" (UID: "1baa0c14-c23b-401c-b20f-3789ff63a4c1") : failed to sync configmap cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.963304 4693 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.963324 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-trusted-ca podName:7b1c4746-f772-49d8-be11-9abc850ea7e2 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.463265336 +0000 UTC m=+153.631904977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-trusted-ca") pod "marketplace-operator-79b997595-npwzs" (UID: "7b1c4746-f772-49d8-be11-9abc850ea7e2") : failed to sync configmap cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.963363 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19df2367-f186-4892-83e6-bee3c8177dc2-serving-cert podName:19df2367-f186-4892-83e6-bee3c8177dc2 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.463343238 +0000 UTC m=+153.631982869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/19df2367-f186-4892-83e6-bee3c8177dc2-serving-cert") pod "service-ca-operator-777779d784-bwgbj" (UID: "19df2367-f186-4892-83e6-bee3c8177dc2") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.963406 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a4ae20-7daa-42b4-9801-c9613c7fd508-certs podName:45a4ae20-7daa-42b4-9801-c9613c7fd508 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.463389719 +0000 UTC m=+153.632029360 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/45a4ae20-7daa-42b4-9801-c9613c7fd508-certs") pod "machine-config-server-qnfsm" (UID: "45a4ae20-7daa-42b4-9801-c9613c7fd508") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.965300 4693 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.965391 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61620225-2125-49da-94f6-f6ef9dd7e6ce-webhook-cert podName:61620225-2125-49da-94f6-f6ef9dd7e6ce nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.46535845 +0000 UTC m=+153.633998051 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/61620225-2125-49da-94f6-f6ef9dd7e6ce-webhook-cert") pod "packageserver-d55dfcdfc-d86dw" (UID: "61620225-2125-49da-94f6-f6ef9dd7e6ce") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.965392 4693 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.965413 4693 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.965442 4693 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.965449 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/19df2367-f186-4892-83e6-bee3c8177dc2-config podName:19df2367-f186-4892-83e6-bee3c8177dc2 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.465440622 +0000 UTC m=+153.634080223 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/19df2367-f186-4892-83e6-bee3c8177dc2-config") pod "service-ca-operator-777779d784-bwgbj" (UID: "19df2367-f186-4892-83e6-bee3c8177dc2") : failed to sync configmap cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.965480 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a4ae20-7daa-42b4-9801-c9613c7fd508-node-bootstrap-token podName:45a4ae20-7daa-42b4-9801-c9613c7fd508 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.465458882 +0000 UTC m=+153.634098543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/45a4ae20-7daa-42b4-9801-c9613c7fd508-node-bootstrap-token") pod "machine-config-server-qnfsm" (UID: "45a4ae20-7daa-42b4-9801-c9613c7fd508") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.965514 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80dd1d93-b2bd-4fad-b199-aa072c2c8216-package-server-manager-serving-cert podName:80dd1d93-b2bd-4fad-b199-aa072c2c8216 nodeName:}" failed. 
No retries permitted until 2025-12-12 15:48:46.465496993 +0000 UTC m=+153.634136734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/80dd1d93-b2bd-4fad-b199-aa072c2c8216-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-pqr7h" (UID: "80dd1d93-b2bd-4fad-b199-aa072c2c8216") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.966565 4693 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.966601 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61620225-2125-49da-94f6-f6ef9dd7e6ce-apiservice-cert podName:61620225-2125-49da-94f6-f6ef9dd7e6ce nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.466593041 +0000 UTC m=+153.635232642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/61620225-2125-49da-94f6-f6ef9dd7e6ce-apiservice-cert") pod "packageserver-d55dfcdfc-d86dw" (UID: "61620225-2125-49da-94f6-f6ef9dd7e6ce") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.966663 4693 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.966739 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-operator-metrics podName:7b1c4746-f772-49d8-be11-9abc850ea7e2 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.466715584 +0000 UTC m=+153.635355225 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-operator-metrics") pod "marketplace-operator-79b997595-npwzs" (UID: "7b1c4746-f772-49d8-be11-9abc850ea7e2") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.966743 4693 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.966860 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1baa0c14-c23b-401c-b20f-3789ff63a4c1-signing-key podName:1baa0c14-c23b-401c-b20f-3789ff63a4c1 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.466835498 +0000 UTC m=+153.635475169 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/1baa0c14-c23b-401c-b20f-3789ff63a4c1-signing-key") pod "service-ca-9c57cc56f-qd9z7" (UID: "1baa0c14-c23b-401c-b20f-3789ff63a4c1") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.978338 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.990229 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:45 crc kubenswrapper[4693]: E1212 15:48:45.991554 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.491513809 +0000 UTC m=+153.660153450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:45 crc kubenswrapper[4693]: I1212 15:48:45.997842 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.029064 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.049715 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.058225 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.067706 4693 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.068102 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2872c20a-d73b-43e0-a4c4-dc6238f5d60b-config-volume podName:2872c20a-d73b-43e0-a4c4-dc6238f5d60b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.568076768 +0000 UTC m=+153.736716379 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2872c20a-d73b-43e0-a4c4-dc6238f5d60b-config-volume") pod "dns-default-wd8bp" (UID: "2872c20a-d73b-43e0-a4c4-dc6238f5d60b") : failed to sync configmap cache: timed out waiting for the condition Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.068746 4693 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.068835 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2872c20a-d73b-43e0-a4c4-dc6238f5d60b-metrics-tls podName:2872c20a-d73b-43e0-a4c4-dc6238f5d60b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.568813717 +0000 UTC m=+153.737453338 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2872c20a-d73b-43e0-a4c4-dc6238f5d60b-metrics-tls") pod "dns-default-wd8bp" (UID: "2872c20a-d73b-43e0-a4c4-dc6238f5d60b") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.068754 4693 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.068874 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157-cert podName:20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.568866458 +0000 UTC m=+153.737506079 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157-cert") pod "ingress-canary-42cgp" (UID: "20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.078006 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.092214 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.092876 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.592854622 +0000 UTC m=+153.761494293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.098203 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.118227 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.137805 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.157820 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.178984 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.194523 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.194713 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.694683097 +0000 UTC m=+153.863322698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.195016 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.195549 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.695528168 +0000 UTC m=+153.864167859 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.197475 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.217793 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.238457 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.258748 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.279029 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.296203 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.296421 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.796397448 +0000 UTC m=+153.965037049 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.296887 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-config\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.296958 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-trusted-ca\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.297051 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/19166153-66f0-4f4f-8f4b-ef7af5a72770-machine-approver-tls\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.297464 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.297637 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-auth-proxy-config\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.297850 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.797838435 +0000 UTC m=+153.966478116 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.298475 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.318915 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.337974 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.356726 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.377334 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.398738 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.399144 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/901e060a-e400-40cd-bd50-d8bfb7c5127a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ks47b\" (UID: \"901e060a-e400-40cd-bd50-d8bfb7c5127a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.399155 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.899129946 +0000 UTC m=+154.067769567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.399647 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.399929 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:46.899916286 +0000 UTC m=+154.068555897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.416849 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5fzs\" (UniqueName: \"kubernetes.io/projected/53109c03-d846-4eaa-a01e-7aca23a720f6-kube-api-access-m5fzs\") pod \"machine-config-controller-84d6567774-n4mbr\" (UID: \"53109c03-d846-4eaa-a01e-7aca23a720f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.416961 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.439977 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-bound-sa-token\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.454531 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv2vw\" (UniqueName: \"kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-kube-api-access-qv2vw\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.483140 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28dc9895-00d3-4e72-930a-ea9b0ca468c4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f2dvf\" (UID: \"28dc9895-00d3-4e72-930a-ea9b0ca468c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2dvf" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.491927 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfpm7\" (UniqueName: \"kubernetes.io/projected/4321ba4d-fc67-4945-a86a-9b6f30ab66ce-kube-api-access-rfpm7\") pod \"dns-operator-744455d44c-wp4hr\" (UID: \"4321ba4d-fc67-4945-a86a-9b6f30ab66ce\") " pod="openshift-dns-operator/dns-operator-744455d44c-wp4hr" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.501068 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.501355 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:47.001332901 +0000 UTC m=+154.169972512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.501586 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/80dd1d93-b2bd-4fad-b199-aa072c2c8216-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pqr7h\" (UID: \"80dd1d93-b2bd-4fad-b199-aa072c2c8216\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.501626 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.501768 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61620225-2125-49da-94f6-f6ef9dd7e6ce-apiservice-cert\") pod \"packageserver-d55dfcdfc-d86dw\" (UID: \"61620225-2125-49da-94f6-f6ef9dd7e6ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.501799 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-npwzs\" (UID: \"7b1c4746-f772-49d8-be11-9abc850ea7e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.501962 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1baa0c14-c23b-401c-b20f-3789ff63a4c1-signing-key\") pod \"service-ca-9c57cc56f-qd9z7\" (UID: \"1baa0c14-c23b-401c-b20f-3789ff63a4c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-qd9z7" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.502011 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f92dec3b-e25a-4f3f-a004-e85cc51093c5-config-volume\") pod \"collect-profiles-29425905-5fvqm\" (UID: \"f92dec3b-e25a-4f3f-a004-e85cc51093c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.502059 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1baa0c14-c23b-401c-b20f-3789ff63a4c1-signing-cabundle\") pod \"service-ca-9c57cc56f-qd9z7\" (UID: \"1baa0c14-c23b-401c-b20f-3789ff63a4c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-qd9z7" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.502084 4693 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19df2367-f186-4892-83e6-bee3c8177dc2-serving-cert\") pod \"service-ca-operator-777779d784-bwgbj\" (UID: \"19df2367-f186-4892-83e6-bee3c8177dc2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.502148 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-npwzs\" (UID: \"7b1c4746-f772-49d8-be11-9abc850ea7e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.502210 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/45a4ae20-7daa-42b4-9801-c9613c7fd508-certs\") pod \"machine-config-server-qnfsm\" (UID: \"45a4ae20-7daa-42b4-9801-c9613c7fd508\") " pod="openshift-machine-config-operator/machine-config-server-qnfsm" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.502464 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61620225-2125-49da-94f6-f6ef9dd7e6ce-webhook-cert\") pod \"packageserver-d55dfcdfc-d86dw\" (UID: \"61620225-2125-49da-94f6-f6ef9dd7e6ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.502520 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19df2367-f186-4892-83e6-bee3c8177dc2-config\") pod \"service-ca-operator-777779d784-bwgbj\" (UID: \"19df2367-f186-4892-83e6-bee3c8177dc2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.502589 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/45a4ae20-7daa-42b4-9801-c9613c7fd508-node-bootstrap-token\") pod \"machine-config-server-qnfsm\" (UID: \"45a4ae20-7daa-42b4-9801-c9613c7fd508\") " pod="openshift-machine-config-operator/machine-config-server-qnfsm" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.503238 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1baa0c14-c23b-401c-b20f-3789ff63a4c1-signing-cabundle\") pod \"service-ca-9c57cc56f-qd9z7\" (UID: \"1baa0c14-c23b-401c-b20f-3789ff63a4c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-qd9z7" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.503382 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-npwzs\" (UID: \"7b1c4746-f772-49d8-be11-9abc850ea7e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.503734 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 15:48:47.003654371 +0000 UTC m=+154.172293982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.505097 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f92dec3b-e25a-4f3f-a004-e85cc51093c5-config-volume\") pod \"collect-profiles-29425905-5fvqm\" (UID: \"f92dec3b-e25a-4f3f-a004-e85cc51093c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.506573 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1baa0c14-c23b-401c-b20f-3789ff63a4c1-signing-key\") pod \"service-ca-9c57cc56f-qd9z7\" (UID: \"1baa0c14-c23b-401c-b20f-3789ff63a4c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-qd9z7" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.506580 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19df2367-f186-4892-83e6-bee3c8177dc2-serving-cert\") pod \"service-ca-operator-777779d784-bwgbj\" (UID: \"19df2367-f186-4892-83e6-bee3c8177dc2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.506606 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/45a4ae20-7daa-42b4-9801-c9613c7fd508-certs\") pod \"machine-config-server-qnfsm\" (UID: \"45a4ae20-7daa-42b4-9801-c9613c7fd508\") " pod="openshift-machine-config-operator/machine-config-server-qnfsm" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.507155 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19df2367-f186-4892-83e6-bee3c8177dc2-config\") pod \"service-ca-operator-777779d784-bwgbj\" (UID: \"19df2367-f186-4892-83e6-bee3c8177dc2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.508498 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61620225-2125-49da-94f6-f6ef9dd7e6ce-webhook-cert\") pod \"packageserver-d55dfcdfc-d86dw\" (UID: \"61620225-2125-49da-94f6-f6ef9dd7e6ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.515569 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-npwzs\" (UID: \"7b1c4746-f772-49d8-be11-9abc850ea7e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.515677 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/61620225-2125-49da-94f6-f6ef9dd7e6ce-apiservice-cert\") pod \"packageserver-d55dfcdfc-d86dw\" (UID: \"61620225-2125-49da-94f6-f6ef9dd7e6ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.516444 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/80dd1d93-b2bd-4fad-b199-aa072c2c8216-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pqr7h\" (UID: \"80dd1d93-b2bd-4fad-b199-aa072c2c8216\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.517313 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/45a4ae20-7daa-42b4-9801-c9613c7fd508-node-bootstrap-token\") pod \"machine-config-server-qnfsm\" (UID: \"45a4ae20-7daa-42b4-9801-c9613c7fd508\") " pod="openshift-machine-config-operator/machine-config-server-qnfsm" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.517568 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.537920 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.557639 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.578207 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.597608 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.597709 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr"] Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.603557 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.603820 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:47.103785702 +0000 UTC m=+154.272425343 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.603927 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157-cert\") pod \"ingress-canary-42cgp\" (UID: \"20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157\") " pod="openshift-ingress-canary/ingress-canary-42cgp"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.604008 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2872c20a-d73b-43e0-a4c4-dc6238f5d60b-config-volume\") pod \"dns-default-wd8bp\" (UID: \"2872c20a-d73b-43e0-a4c4-dc6238f5d60b\") " pod="openshift-dns/dns-default-wd8bp"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.604038 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.604218 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2872c20a-d73b-43e0-a4c4-dc6238f5d60b-metrics-tls\") pod \"dns-default-wd8bp\" (UID: \"2872c20a-d73b-43e0-a4c4-dc6238f5d60b\") " pod="openshift-dns/dns-default-wd8bp"
Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.604502 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:47.10448596 +0000 UTC m=+154.273125561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.609099 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157-cert\") pod \"ingress-canary-42cgp\" (UID: \"20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157\") " pod="openshift-ingress-canary/ingress-canary-42cgp"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.618786 4693 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.637727 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.657997 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.670322 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2dvf"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.677926 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.688715 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2872c20a-d73b-43e0-a4c4-dc6238f5d60b-metrics-tls\") pod \"dns-default-wd8bp\" (UID: \"2872c20a-d73b-43e0-a4c4-dc6238f5d60b\") " pod="openshift-dns/dns-default-wd8bp"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.691697 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wp4hr"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.696601 4693 request.go:700] Waited for 1.880068999s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.698249 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.705231 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.705578 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:47.205540266 +0000 UTC m=+154.374179877 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.705639 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2872c20a-d73b-43e0-a4c4-dc6238f5d60b-config-volume\") pod \"dns-default-wd8bp\" (UID: \"2872c20a-d73b-43e0-a4c4-dc6238f5d60b\") " pod="openshift-dns/dns-default-wd8bp"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.706049 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.707852 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:47.207835294 +0000 UTC m=+154.376474895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.730822 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f25h\" (UniqueName: \"kubernetes.io/projected/7037e0f8-e094-40a4-9188-7dc2fdd1b4a6-kube-api-access-4f25h\") pod \"machine-api-operator-5694c8668f-m7l28\" (UID: \"7037e0f8-e094-40a4-9188-7dc2fdd1b4a6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.756076 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nwbk\" (UniqueName: \"kubernetes.io/projected/31b7f38e-5f91-43bf-bba4-bc8592747704-kube-api-access-5nwbk\") pod \"apiserver-76f77b778f-d28jp\" (UID: \"31b7f38e-5f91-43bf-bba4-bc8592747704\") " pod="openshift-apiserver/apiserver-76f77b778f-d28jp"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.772167 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64kxm\" (UniqueName: \"kubernetes.io/projected/f743d3ca-28a7-4e25-955f-1385b9ef8c05-kube-api-access-64kxm\") pod \"oauth-openshift-558db77b4-wm8d5\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.791540 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2qhw\" (UniqueName: \"kubernetes.io/projected/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-kube-api-access-f2qhw\") pod \"console-f9d7485db-b7rfx\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " pod="openshift-console/console-f9d7485db-b7rfx"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.809866 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.810373 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:47.310357687 +0000 UTC m=+154.478997288 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.813472 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-b7rfx"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.816559 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwr5r\" (UniqueName: \"kubernetes.io/projected/901e060a-e400-40cd-bd50-d8bfb7c5127a-kube-api-access-nwr5r\") pod \"cluster-image-registry-operator-dc59b4c8b-ks47b\" (UID: \"901e060a-e400-40cd-bd50-d8bfb7c5127a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.836337 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k47hd\" (UID: \"1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.851336 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d28jp"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.860331 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw27f\" (UniqueName: \"kubernetes.io/projected/49c82763-4d39-4424-8aa0-745158bd96c6-kube-api-access-kw27f\") pod \"router-default-5444994796-hfmz9\" (UID: \"49c82763-4d39-4424-8aa0-745158bd96c6\") " pod="openshift-ingress/router-default-5444994796-hfmz9"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.865695 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2dvf"]
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.874331 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxfmh\" (UniqueName: \"kubernetes.io/projected/c2850071-0204-4052-b9f3-863243d3300b-kube-api-access-nxfmh\") pod \"ingress-operator-5b745b69d9-gr2s6\" (UID: \"c2850071-0204-4052-b9f3-863243d3300b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.876401 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.894623 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hts9j\" (UniqueName: \"kubernetes.io/projected/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-kube-api-access-hts9j\") pod \"controller-manager-879f6c89f-88srp\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88srp"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.907481 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-88srp"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.912263 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlvbg\" (UniqueName: \"kubernetes.io/projected/94c146b4-f621-42ff-b0db-5e471b8938b6-kube-api-access-vlvbg\") pod \"apiserver-7bbb656c7d-hw9b4\" (UID: \"94c146b4-f621-42ff-b0db-5e471b8938b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.913248 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:46 crc kubenswrapper[4693]: E1212 15:48:46.913658 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:47.41364539 +0000 UTC m=+154.582284991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.923982 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wp4hr"]
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.924573 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.934144 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb49v\" (UniqueName: \"kubernetes.io/projected/050ec804-2082-4b39-8699-28d1c1992425-kube-api-access-pb49v\") pod \"openshift-controller-manager-operator-756b6f6bc6-nnqfp\" (UID: \"050ec804-2082-4b39-8699-28d1c1992425\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.951037 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hfmz9"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.954399 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s28nw\" (UniqueName: \"kubernetes.io/projected/6be45ad2-fe1c-4b29-8aa8-c5eec39978a3-kube-api-access-s28nw\") pod \"control-plane-machine-set-operator-78cbb6b69f-658pn\" (UID: \"6be45ad2-fe1c-4b29-8aa8-c5eec39978a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658pn"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.980367 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2850071-0204-4052-b9f3-863243d3300b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gr2s6\" (UID: \"c2850071-0204-4052-b9f3-863243d3300b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6"
Dec 12 15:48:46 crc kubenswrapper[4693]: I1212 15:48:46.995453 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/901e060a-e400-40cd-bd50-d8bfb7c5127a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ks47b\" (UID: \"901e060a-e400-40cd-bd50-d8bfb7c5127a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.010116 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.010928 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-b7rfx"]
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.014555 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.014968 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:47.514952652 +0000 UTC m=+154.683592253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.016312 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbt64\" (UniqueName: \"kubernetes.io/projected/9675a84b-88dc-4a3c-8fe9-070088ada9b1-kube-api-access-xbt64\") pod \"cluster-samples-operator-665b6dd947-fcvw7\" (UID: \"9675a84b-88dc-4a3c-8fe9-070088ada9b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fcvw7"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.021674 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658pn"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.043692 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvk7z\" (UniqueName: \"kubernetes.io/projected/d64fcdf8-100d-4628-beb2-126a10b8f71c-kube-api-access-cvk7z\") pod \"migrator-59844c95c7-spsbw\" (UID: \"d64fcdf8-100d-4628-beb2-126a10b8f71c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-spsbw"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.054155 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hfmz9" event={"ID":"49c82763-4d39-4424-8aa0-745158bd96c6","Type":"ContainerStarted","Data":"8032ea75ad21146771ce9e1a2fc453f7951401a296bb3979ebf352d8b4636b4f"}
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.069109 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b79gp\" (UniqueName: \"kubernetes.io/projected/62fa5de9-a571-40e5-a32c-e1708a428f19-kube-api-access-b79gp\") pod \"openshift-config-operator-7777fb866f-w6x8t\" (UID: \"62fa5de9-a571-40e5-a32c-e1708a428f19\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.071038 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wp4hr" event={"ID":"4321ba4d-fc67-4945-a86a-9b6f30ab66ce","Type":"ContainerStarted","Data":"3acbcb6ab091ebddee4f2d3b5b540f97ed187c4bf79e8f854ba5d16ec542084f"}
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.078031 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr" event={"ID":"53109c03-d846-4eaa-a01e-7aca23a720f6","Type":"ContainerStarted","Data":"dcf71193e7df52145236e21fc866b5906a01ddc4012871ef60173a06e2f70301"}
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.078081 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr" event={"ID":"53109c03-d846-4eaa-a01e-7aca23a720f6","Type":"ContainerStarted","Data":"7430856db30700410781a3df51b86686780fdbf2261214c7dd265a171e6ac135"}
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.078092 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr" event={"ID":"53109c03-d846-4eaa-a01e-7aca23a720f6","Type":"ContainerStarted","Data":"229771ebbd141d31d6f2471ac482c9fc5c2354d002ce663176e63b248fa281fd"}
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.091092 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d28jp"]
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.096680 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.096901 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.097456 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjc5p\" (UniqueName: \"kubernetes.io/projected/4c536f6e-dc3f-407b-81bc-ad0febbae611-kube-api-access-cjc5p\") pod \"openshift-apiserver-operator-796bbdcf4f-qdcck\" (UID: \"4c536f6e-dc3f-407b-81bc-ad0febbae611\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.109466 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2dvf" event={"ID":"28dc9895-00d3-4e72-930a-ea9b0ca468c4","Type":"ContainerStarted","Data":"3ea7622ff954996a3df949a322e228443e284d693e7088a3bc8b2b71a7ea8177"}
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.109995 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f59b453-c693-4382-b7f5-82d3c8ee48e9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-29ww6\" (UID: \"4f59b453-c693-4382-b7f5-82d3c8ee48e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.116034 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.116436 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:47.616420087 +0000 UTC m=+154.785059688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.116981 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9h78\" (UniqueName: \"kubernetes.io/projected/586d0874-ebe1-41db-b596-1dfed12b2b94-kube-api-access-c9h78\") pod \"downloads-7954f5f757-bz9v2\" (UID: \"586d0874-ebe1-41db-b596-1dfed12b2b94\") " pod="openshift-console/downloads-7954f5f757-bz9v2"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.129748 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.130643 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt4x5\" (UniqueName: \"kubernetes.io/projected/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-kube-api-access-qt4x5\") pod \"route-controller-manager-6576b87f9c-pzsn6\" (UID: \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.138359 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wm8d5"]
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.170420 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwl4p\" (UniqueName: \"kubernetes.io/projected/38f31f08-dc12-4feb-8567-ab19705f0e16-kube-api-access-nwl4p\") pod \"machine-config-operator-74547568cd-pxqqx\" (UID: \"38f31f08-dc12-4feb-8567-ab19705f0e16\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.173326 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbrlk\" (UniqueName: \"kubernetes.io/projected/87e8f397-20cd-469f-924d-204ce1a8db47-kube-api-access-sbrlk\") pod \"authentication-operator-69f744f599-wz942\" (UID: \"87e8f397-20cd-469f-924d-204ce1a8db47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.188419 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-88srp"]
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.201141 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6lks\" (UniqueName: \"kubernetes.io/projected/44b1f98a-591c-49e4-9d2a-a0130f336528-kube-api-access-j6lks\") pod \"kube-storage-version-migrator-operator-b67b599dd-hr46r\" (UID: \"44b1f98a-591c-49e4-9d2a-a0130f336528\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.207797 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-m7l28"]
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.213055 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.216930 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.217105 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:47.717082303 +0000 UTC m=+154.885721914 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.217423 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.218254 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:47.718240892 +0000 UTC m=+154.886880493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.221458 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkqhx\" (UniqueName: \"kubernetes.io/projected/8832f47a-79fe-4045-91d9-d42f21a2652f-kube-api-access-jkqhx\") pod \"etcd-operator-b45778765-xqnqh\" (UID: \"8832f47a-79fe-4045-91d9-d42f21a2652f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.237602 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.238662 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.251914 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg7zr\" (UniqueName: \"kubernetes.io/projected/38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95-kube-api-access-sg7zr\") pod \"olm-operator-6b444d44fb-hhs2z\" (UID: \"38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.252674 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fcvw7"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.267794 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fw9t\" (UniqueName: \"kubernetes.io/projected/f51bf74b-1d86-4a22-a355-f2c64a6516e5-kube-api-access-9fw9t\") pod \"console-operator-58897d9998-wcl2w\" (UID: \"f51bf74b-1d86-4a22-a355-f2c64a6516e5\") " pod="openshift-console-operator/console-operator-58897d9998-wcl2w"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.284594 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp7lh\" (UniqueName: \"kubernetes.io/projected/1baa0c14-c23b-401c-b20f-3789ff63a4c1-kube-api-access-gp7lh\") pod \"service-ca-9c57cc56f-qd9z7\" (UID: \"1baa0c14-c23b-401c-b20f-3789ff63a4c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-qd9z7"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.292886 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b56t9\" (UniqueName: \"kubernetes.io/projected/80dd1d93-b2bd-4fad-b199-aa072c2c8216-kube-api-access-b56t9\") pod \"package-server-manager-789f6589d5-pqr7h\" (UID: \"80dd1d93-b2bd-4fad-b199-aa072c2c8216\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h"
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.297131 4693 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.297198 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-trusted-ca podName:f28a792f-4814-4a24-ab79-3a5b00adb25e nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.297180562 +0000 UTC m=+155.465820163 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-trusted-ca") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.298349 4693 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.298393 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-config podName:19166153-66f0-4f4f-8f4b-ef7af5a72770 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.298381483 +0000 UTC m=+155.467021084 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-config") pod "machine-approver-56656f9798-x6zxm" (UID: "19166153-66f0-4f4f-8f4b-ef7af5a72770") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.298418 4693 secret.go:188] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.298445 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19166153-66f0-4f4f-8f4b-ef7af5a72770-machine-approver-tls podName:19166153-66f0-4f4f-8f4b-ef7af5a72770 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.298436894 +0000 UTC m=+155.467076495 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/19166153-66f0-4f4f-8f4b-ef7af5a72770-machine-approver-tls") pod "machine-approver-56656f9798-x6zxm" (UID: "19166153-66f0-4f4f-8f4b-ef7af5a72770") : failed to sync secret cache: timed out waiting for the condition
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.297909 4693 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.300493 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-auth-proxy-config podName:19166153-66f0-4f4f-8f4b-ef7af5a72770 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.300409655 +0000 UTC m=+155.469049256 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-auth-proxy-config") pod "machine-approver-56656f9798-x6zxm" (UID: "19166153-66f0-4f4f-8f4b-ef7af5a72770") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.305607 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bz9v2"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.315646 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.316963 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl2tz\" (UniqueName: \"kubernetes.io/projected/f92dec3b-e25a-4f3f-a004-e85cc51093c5-kube-api-access-xl2tz\") pod \"collect-profiles-29425905-5fvqm\" (UID: \"f92dec3b-e25a-4f3f-a004-e85cc51093c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.318784 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.319211 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:47.819197585 +0000 UTC m=+154.987837186 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.326385 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658pn"]
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.329200 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-spsbw"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.335142 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.335163 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtc78\" (UniqueName: \"kubernetes.io/projected/b59eec2f-4046-439e-a4c3-1201ccdd8cd5-kube-api-access-wtc78\") pod \"multus-admission-controller-857f4d67dd-qp87x\" (UID: \"b59eec2f-4046-439e-a4c3-1201ccdd8cd5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qp87x"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.341701 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qp87x"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.348665 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wcl2w"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.356628 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.359852 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnc9z\" (UniqueName: \"kubernetes.io/projected/19df2367-f186-4892-83e6-bee3c8177dc2-kube-api-access-gnc9z\") pod \"service-ca-operator-777779d784-bwgbj\" (UID: \"19df2367-f186-4892-83e6-bee3c8177dc2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.370246 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.373424 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p82t9\" (UniqueName: \"kubernetes.io/projected/d3b86c37-5764-4b23-b927-ad4a77885456-kube-api-access-p82t9\") pod \"catalog-operator-68c6474976-r6qvl\" (UID: \"d3b86c37-5764-4b23-b927-ad4a77885456\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.377476 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-qd9z7"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.381363 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.389596 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.396031 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2c7m\" (UniqueName: \"kubernetes.io/projected/45a4ae20-7daa-42b4-9801-c9613c7fd508-kube-api-access-d2c7m\") pod \"machine-config-server-qnfsm\" (UID: \"45a4ae20-7daa-42b4-9801-c9613c7fd508\") " pod="openshift-machine-config-operator/machine-config-server-qnfsm"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.396894 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h"
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.400206 4693 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.400304 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/901e060a-e400-40cd-bd50-d8bfb7c5127a-trusted-ca podName:901e060a-e400-40cd-bd50-d8bfb7c5127a nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.40028689 +0000 UTC m=+155.568926491 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/901e060a-e400-40cd-bd50-d8bfb7c5127a-trusted-ca") pod "cluster-image-registry-operator-dc59b4c8b-ks47b" (UID: "901e060a-e400-40cd-bd50-d8bfb7c5127a") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.410842 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qnfsm"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.419722 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.419967 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:47.919956823 +0000 UTC m=+155.088596424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.432110 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrg2q\" (UniqueName: \"kubernetes.io/projected/61620225-2125-49da-94f6-f6ef9dd7e6ce-kube-api-access-lrg2q\") pod \"packageserver-d55dfcdfc-d86dw\" (UID: \"61620225-2125-49da-94f6-f6ef9dd7e6ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.432520 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.435963 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjwsj\" (UniqueName: \"kubernetes.io/projected/7b1c4746-f772-49d8-be11-9abc850ea7e2-kube-api-access-kjwsj\") pod \"marketplace-operator-79b997595-npwzs\" (UID: \"7b1c4746-f772-49d8-be11-9abc850ea7e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-npwzs"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.437556 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.476862 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw9n5\" (UniqueName: \"kubernetes.io/projected/2872c20a-d73b-43e0-a4c4-dc6238f5d60b-kube-api-access-zw9n5\") pod \"dns-default-wd8bp\" (UID: \"2872c20a-d73b-43e0-a4c4-dc6238f5d60b\") " pod="openshift-dns/dns-default-wd8bp"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.494693 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5z4x\" (UniqueName: \"kubernetes.io/projected/ef8bea0e-6f25-4c4d-a294-f246fbff9926-kube-api-access-g5z4x\") pod \"csi-hostpathplugin-wwxcz\" (UID: \"ef8bea0e-6f25-4c4d-a294-f246fbff9926\") " pod="hostpath-provisioner/csi-hostpathplugin-wwxcz"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.512269 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx7g4\" (UniqueName: \"kubernetes.io/projected/20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157-kube-api-access-lx7g4\") pod \"ingress-canary-42cgp\" (UID: \"20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157\") " pod="openshift-ingress-canary/ingress-canary-42cgp"
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.515907 4693 projected.go:288] Couldn't get configMap openshift-cluster-machine-approver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.515940 4693 projected.go:194] Error preparing data for projected volume kube-api-access-t5vnt for pod openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm: failed to sync configmap cache: timed out waiting for the condition
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.516078 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19166153-66f0-4f4f-8f4b-ef7af5a72770-kube-api-access-t5vnt podName:19166153-66f0-4f4f-8f4b-ef7af5a72770 nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.016061082 +0000 UTC m=+155.184700683 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t5vnt" (UniqueName: "kubernetes.io/projected/19166153-66f0-4f4f-8f4b-ef7af5a72770-kube-api-access-t5vnt") pod "machine-approver-56656f9798-x6zxm" (UID: "19166153-66f0-4f4f-8f4b-ef7af5a72770") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.520522 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.520755 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.020704311 +0000 UTC m=+155.189343912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.520948 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.521583 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.021568643 +0000 UTC m=+155.190208244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.528657 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.537338 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.558813 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.580092 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.597792 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.619005 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.623620 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.623954 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.123938702 +0000 UTC m=+155.292578303 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.656590 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.662727 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-npwzs"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.702804 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4"]
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.703012 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.704648 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd"]
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.718003 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-42cgp"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.727683 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.727979 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wz942"]
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.728240 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.22821561 +0000 UTC m=+155.396855271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.743354 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wwxcz"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.748672 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wd8bp"
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.768262 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t"]
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.786426 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp"]
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.829894 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.830082 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.330055945 +0000 UTC m=+155.498695546 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.830242 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.830689 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.330679361 +0000 UTC m=+155.499318962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.848852 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck"]
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.931710 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.931925 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.431893431 +0000 UTC m=+155.600533042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:47 crc kubenswrapper[4693]: I1212 15:48:47.931992 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:47 crc kubenswrapper[4693]: E1212 15:48:47.932367 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.432356662 +0000 UTC m=+155.600996323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:47 crc kubenswrapper[4693]: W1212 15:48:47.948736 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62fa5de9_a571_40e5_a32c_e1708a428f19.slice/crio-540753413d4a3d5712c7d79c3ff238449bd282ce95cd3f0d28d75ff945d265e2 WatchSource:0}: Error finding container 540753413d4a3d5712c7d79c3ff238449bd282ce95cd3f0d28d75ff945d265e2: Status 404 returned error can't find the container with id 540753413d4a3d5712c7d79c3ff238449bd282ce95cd3f0d28d75ff945d265e2
Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.035066 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:48 crc kubenswrapper[4693]: E1212 15:48:48.035460 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.535426779 +0000 UTC m=+155.704066390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.035884 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.036077 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5vnt\" (UniqueName: \"kubernetes.io/projected/19166153-66f0-4f4f-8f4b-ef7af5a72770-kube-api-access-t5vnt\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm"
Dec 12 15:48:48 crc kubenswrapper[4693]: E1212 15:48:48.037597 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.537585195 +0000 UTC m=+155.706224796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.084252 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5vnt\" (UniqueName: \"kubernetes.io/projected/19166153-66f0-4f4f-8f4b-ef7af5a72770-kube-api-access-t5vnt\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm"
Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.127713 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck" event={"ID":"4c536f6e-dc3f-407b-81bc-ad0febbae611","Type":"ContainerStarted","Data":"0608c4dce10bba8cd7a4bc9564eaa15a940592a83f05f57ac1b83e968522c711"}
Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.130473 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd" event={"ID":"1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf","Type":"ContainerStarted","Data":"dcdee1af62f4480a4b4cd817a55a0b883f85c09e7a07244ea1dc2710bf426354"}
Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.138767 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:48 crc kubenswrapper[4693]: E1212 15:48:48.139289 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.639251826 +0000 UTC m=+155.807891467 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.146828 4693 generic.go:334] "Generic (PLEG): container finished" podID="31b7f38e-5f91-43bf-bba4-bc8592747704" containerID="544f4c78d89cac3a8cb01f6c885905cef245857ffc870912265829ad66609f89" exitCode=0 Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.146945 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d28jp" event={"ID":"31b7f38e-5f91-43bf-bba4-bc8592747704","Type":"ContainerDied","Data":"544f4c78d89cac3a8cb01f6c885905cef245857ffc870912265829ad66609f89"} Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.146978 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d28jp" event={"ID":"31b7f38e-5f91-43bf-bba4-bc8592747704","Type":"ContainerStarted","Data":"041162a51c2f21a0d0d0ee8600330ea1fd7b19cc55b7dd2dfc46bd844d528bd2"} Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.158584 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" event={"ID":"f8c75cc9-2bff-43c4-b8c8-838b67ea4874","Type":"ContainerStarted","Data":"15f428c3542ed1bdcfed059d4f67cb580c94342d1506b301cc113e6190601c90"} Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.164810 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wp4hr" event={"ID":"4321ba4d-fc67-4945-a86a-9b6f30ab66ce","Type":"ContainerStarted","Data":"78be9c55df2f1edf0de1bebb752b938154fb006896086fa34ae54d6eb8207866"} Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.166050 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28" event={"ID":"7037e0f8-e094-40a4-9188-7dc2fdd1b4a6","Type":"ContainerStarted","Data":"4455d24fa2db3226985cadc726a0ddd6ccb4fa7002301a67187d7daf505c31f7"} Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.166785 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" event={"ID":"94c146b4-f621-42ff-b0db-5e471b8938b6","Type":"ContainerStarted","Data":"975af464f57f3657186d34cde2a3b62612d46080984556c12cdb9020638df18a"} Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.172566 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" event={"ID":"87e8f397-20cd-469f-924d-204ce1a8db47","Type":"ContainerStarted","Data":"18934ddf0a871f51ab6a7424254b8d103dad063afbdfa6b7d010da2991d1136d"} Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.177062 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658pn" event={"ID":"6be45ad2-fe1c-4b29-8aa8-c5eec39978a3","Type":"ContainerStarted","Data":"87c14567c21c98aa7e27997d002517370e4ab55b675aeb79a699d18b307c6250"} Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.182591 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b7rfx" 
event={"ID":"c9efa1e6-826d-4d2f-8c65-5993738eb0b9","Type":"ContainerStarted","Data":"2504ee2eb534663ae4128b9c6c7104560c512cd6996038a2b50187792e4c7039"} Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.182628 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b7rfx" event={"ID":"c9efa1e6-826d-4d2f-8c65-5993738eb0b9","Type":"ContainerStarted","Data":"e8b68f4e5cb03a30f10c73c644d874372f922f080c66470b38185b25c38d2d12"} Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.188785 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp" event={"ID":"050ec804-2082-4b39-8699-28d1c1992425","Type":"ContainerStarted","Data":"9d07bc8a4ce91a937393ba5764e45fa46e056587fa6e8a2ca567ab7a3c5e691e"} Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.194011 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" event={"ID":"62fa5de9-a571-40e5-a32c-e1708a428f19","Type":"ContainerStarted","Data":"540753413d4a3d5712c7d79c3ff238449bd282ce95cd3f0d28d75ff945d265e2"} Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.213429 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2dvf" event={"ID":"28dc9895-00d3-4e72-930a-ea9b0ca468c4","Type":"ContainerStarted","Data":"5a185bd149f5ddcf11f7d34a343b1ac4636822b0e7690178189e478cae624273"} Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.216447 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qnfsm" event={"ID":"45a4ae20-7daa-42b4-9801-c9613c7fd508","Type":"ContainerStarted","Data":"d6df91653ce5f057661d90f7714330f596235b8fdd68ff8e65fbd1678563e5c7"} Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.236866 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" event={"ID":"f743d3ca-28a7-4e25-955f-1385b9ef8c05","Type":"ContainerStarted","Data":"68468d38e642ecd9302b1c4b7db986de32aea3df59ee282daba49eb7387c8979"} Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.244790 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:48 crc kubenswrapper[4693]: E1212 15:48:48.246799 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.746784847 +0000 UTC m=+155.915424448 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.255242 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hfmz9" event={"ID":"49c82763-4d39-4424-8aa0-745158bd96c6","Type":"ContainerStarted","Data":"979d3ca0028d2fa82eb5aa011aa5c9b4c3540b7482648ad9eed0d95cac19d909"} Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.346433 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.347393 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-auth-proxy-config\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.347591 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-config\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.347621 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-trusted-ca\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.347692 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/19166153-66f0-4f4f-8f4b-ef7af5a72770-machine-approver-tls\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:48 crc kubenswrapper[4693]: E1212 15:48:48.348587 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.848567991 +0000 UTC m=+156.017207612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.351863 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-auth-proxy-config\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.354231 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19166153-66f0-4f4f-8f4b-ef7af5a72770-config\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.355790 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-trusted-ca\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.362879 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/19166153-66f0-4f4f-8f4b-ef7af5a72770-machine-approver-tls\") pod \"machine-approver-56656f9798-x6zxm\" (UID: \"19166153-66f0-4f4f-8f4b-ef7af5a72770\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.441164 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.450034 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/901e060a-e400-40cd-bd50-d8bfb7c5127a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ks47b\" (UID: \"901e060a-e400-40cd-bd50-d8bfb7c5127a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.450174 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.452648 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/901e060a-e400-40cd-bd50-d8bfb7c5127a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ks47b\" (UID: \"901e060a-e400-40cd-bd50-d8bfb7c5127a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" Dec 12 15:48:48 crc kubenswrapper[4693]: E1212 15:48:48.452758 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:48.952719645 +0000 UTC m=+156.121359246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.551911 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:48 crc kubenswrapper[4693]: E1212 15:48:48.552346 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:49.052325504 +0000 UTC m=+156.220965105 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.589090 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658pn" podStartSLOduration=133.589058884 podStartE2EDuration="2m13.589058884s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:48.587045412 +0000 UTC m=+155.755685013" watchObservedRunningTime="2025-12-12 15:48:48.589058884 +0000 UTC m=+155.757698485" Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.653459 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:48 crc kubenswrapper[4693]: E1212 15:48:48.655396 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:49.15537244 +0000 UTC m=+156.324012051 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.679962 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f2dvf" podStartSLOduration=133.679947569 podStartE2EDuration="2m13.679947569s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:48.678980204 +0000 UTC m=+155.847619805" watchObservedRunningTime="2025-12-12 15:48:48.679947569 +0000 UTC m=+155.848587170" Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.689538 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.761549 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:48 crc kubenswrapper[4693]: E1212 15:48:48.761710 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:49.26167787 +0000 UTC m=+156.430317471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.761835 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:48 crc kubenswrapper[4693]: E1212 15:48:48.763479 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:49.263461425 +0000 UTC m=+156.432101026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.864716 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:48 crc kubenswrapper[4693]: E1212 15:48:48.865134 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:49.365114546 +0000 UTC m=+156.533754147 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.935145 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6"] Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.953452 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.956374 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6"] Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.958401 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:48:48 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld Dec 12 15:48:48 crc kubenswrapper[4693]: [+]process-running ok Dec 12 15:48:48 crc kubenswrapper[4693]: healthz check failed Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.958444 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.966315 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:48 crc kubenswrapper[4693]: E1212 15:48:48.966660 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:49.466647284 +0000 UTC m=+156.635286885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:48 crc kubenswrapper[4693]: I1212 15:48:48.997561 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.068507 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hfmz9" podStartSLOduration=134.068490349 podStartE2EDuration="2m14.068490349s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:49.037912617 +0000 UTC m=+156.206552248" watchObservedRunningTime="2025-12-12 15:48:49.068490349 +0000 UTC m=+156.237129950" Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.071090 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:49 crc kubenswrapper[4693]: E1212 15:48:49.071411 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:49.571395934 +0000 UTC m=+156.740035535 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.075437 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wcl2w"] Dec 12 15:48:49 crc kubenswrapper[4693]: W1212 15:48:49.100032 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf51bf74b_1d86_4a22_a355_f2c64a6516e5.slice/crio-559fc075f112fa191311adb6d3f15790bb866c06385b6b506d2bb4a50aef8013 WatchSource:0}: Error finding container 559fc075f112fa191311adb6d3f15790bb866c06385b6b506d2bb4a50aef8013: Status 404 returned error can't find the container with id 559fc075f112fa191311adb6d3f15790bb866c06385b6b506d2bb4a50aef8013 Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.174965 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:49 crc kubenswrapper[4693]: E1212 15:48:49.175311 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:49.675298902 +0000 UTC m=+156.843938503 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.244262 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4mbr" podStartSLOduration=134.244236516 podStartE2EDuration="2m14.244236516s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:49.235203304 +0000 UTC m=+156.403842905" watchObservedRunningTime="2025-12-12 15:48:49.244236516 +0000 UTC m=+156.412876127" Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.286804 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:49 crc kubenswrapper[4693]: E1212 15:48:49.287167 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:49.787147353 +0000 UTC m=+156.955786954 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.307937 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-b7rfx" podStartSLOduration=135.307921695 podStartE2EDuration="2m15.307921695s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:49.306882258 +0000 UTC m=+156.475521859" watchObservedRunningTime="2025-12-12 15:48:49.307921695 +0000 UTC m=+156.476561286" Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.356867 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" event={"ID":"38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95","Type":"ContainerStarted","Data":"a1a3ca9ea9069c933ba60592f62cd310493a9da88a50f664f3f1d883b3652238"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.391097 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:49 crc kubenswrapper[4693]: E1212 15:48:49.391536 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:49.891519954 +0000 UTC m=+157.060159555 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.400359 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" event={"ID":"19166153-66f0-4f4f-8f4b-ef7af5a72770","Type":"ContainerStarted","Data":"390df377520c39238112324b5fbefc41f00b67d0a5f1bf0a4011bd5c0487e9d5"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.400422 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" event={"ID":"19166153-66f0-4f4f-8f4b-ef7af5a72770","Type":"ContainerStarted","Data":"b6331cf4537942aadd3390834af1c2b26c5641666c65633b17929603d0d74a90"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.463517 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" event={"ID":"f51bf74b-1d86-4a22-a355-f2c64a6516e5","Type":"ContainerStarted","Data":"559fc075f112fa191311adb6d3f15790bb866c06385b6b506d2bb4a50aef8013"} Dec 12 15:48:49 crc kubenswrapper[4693]: W1212 15:48:49.464598 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf92dec3b_e25a_4f3f_a004_e85cc51093c5.slice/crio-62b19552fd0c1ead6c1f1b1e53acebe6500da73d831e96df13d8746f5b255625 WatchSource:0}: Error finding container 62b19552fd0c1ead6c1f1b1e53acebe6500da73d831e96df13d8746f5b255625: Status 404 returned error can't find the container with id 62b19552fd0c1ead6c1f1b1e53acebe6500da73d831e96df13d8746f5b255625 Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.494107 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:49 crc kubenswrapper[4693]: E1212 15:48:49.496725 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:49.996701845 +0000 UTC m=+157.165341446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.509001 4693 generic.go:334] "Generic (PLEG): container finished" podID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerID="be34bd413df95183e0effc3e6c2e57f2f44dafa55f99660ae68a18bf2c3a1d4f" exitCode=0 Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.509105 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" event={"ID":"62fa5de9-a571-40e5-a32c-e1708a428f19","Type":"ContainerDied","Data":"be34bd413df95183e0effc3e6c2e57f2f44dafa55f99660ae68a18bf2c3a1d4f"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.516942 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.556265 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.565611 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qp87x"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.565666 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wwxcz"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.574097 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.574165 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-npwzs"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.576572 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-qd9z7"] Dec 12 15:48:49 crc kubenswrapper[4693]: W1212 15:48:49.585474 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb59eec2f_4046_439e_a4c3_1201ccdd8cd5.slice/crio-df0dd9250658a11d4b9bd4fc9634681d1d1369972d79934746a5e6cd23b309e1 WatchSource:0}: Error finding container df0dd9250658a11d4b9bd4fc9634681d1d1369972d79934746a5e6cd23b309e1: Status 404 returned error can't find the container with id df0dd9250658a11d4b9bd4fc9634681d1d1369972d79934746a5e6cd23b309e1 Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.587136 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28" event={"ID":"7037e0f8-e094-40a4-9188-7dc2fdd1b4a6","Type":"ContainerStarted","Data":"6c8f7b6860dec6e5604f7117e469c7238fb934b78f88fafc44d81c9d968e4c46"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.587170 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28" 
event={"ID":"7037e0f8-e094-40a4-9188-7dc2fdd1b4a6","Type":"ContainerStarted","Data":"1ad3995f4b9cd71e838d4b3726ff4923fb2f2c7f23fb8bd5536081e1476cd884"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.589979 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.606510 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:49 crc kubenswrapper[4693]: E1212 15:48:49.606952 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:50.106936265 +0000 UTC m=+157.275575866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.633767 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.634655 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d28jp" event={"ID":"31b7f38e-5f91-43bf-bba4-bc8592747704","Type":"ContainerStarted","Data":"17c2ef61d89725eb2e4a3513dc0f1480d752ccdc83749eb49151f75c27d650ec"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.641628 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.643997 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" event={"ID":"f743d3ca-28a7-4e25-955f-1385b9ef8c05","Type":"ContainerStarted","Data":"0d9dedc7417633b8172baed76d019a70c67ede4b5c8c382565e85cfb9543e6ac"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.645265 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fcvw7"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.645692 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.648561 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-spsbw"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.660525 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qnfsm" 
event={"ID":"45a4ae20-7daa-42b4-9801-c9613c7fd508","Type":"ContainerStarted","Data":"d246920693bb5b3a7b8da014e67d9415df3ac1d35fe88ee798e1d9cc7efc7a2a"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.667206 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bz9v2"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.667796 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" event={"ID":"f8c75cc9-2bff-43c4-b8c8-838b67ea4874","Type":"ContainerStarted","Data":"439674ffc15f1bd26f24b53911af69afdd189f71348eaa1d5192ad169b85fb5a"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.668683 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.675612 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-42cgp"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.679002 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.685891 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-m7l28" podStartSLOduration=134.685847474 podStartE2EDuration="2m14.685847474s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:49.634730896 +0000 UTC m=+156.803370507" watchObservedRunningTime="2025-12-12 15:48:49.685847474 +0000 UTC m=+156.854487075" Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.687477 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xqnqh"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.690490 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658pn" event={"ID":"6be45ad2-fe1c-4b29-8aa8-c5eec39978a3","Type":"ContainerStarted","Data":"fda7549c6b60a2d78bd0d2dcaa25dd7b03f49af5be644f7c971ba695a656fc2b"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.695337 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.699400 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.699652 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp" event={"ID":"050ec804-2082-4b39-8699-28d1c1992425","Type":"ContainerStarted","Data":"584f6e4ed0ec61fee1d7b0f96ff57e9e200bab1e1de92d0f4ee942048e8af7a4"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.703222 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" podStartSLOduration=135.703202318 podStartE2EDuration="2m15.703202318s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-12-12 15:48:49.691130159 +0000 UTC m=+156.859769760" watchObservedRunningTime="2025-12-12 15:48:49.703202318 +0000 UTC m=+156.871841919" Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.709110 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6" event={"ID":"c2850071-0204-4052-b9f3-863243d3300b","Type":"ContainerStarted","Data":"350fbcbaf6e1f2adc1ac0c70d45f039bb520ae77b4b218ae404e248cc6368759"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.709255 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6" event={"ID":"c2850071-0204-4052-b9f3-863243d3300b","Type":"ContainerStarted","Data":"79903bd51f9c6c1f3b9c018f736b3f6634ad392db8aa62d76109a6937ef340c3"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.709780 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:49 crc kubenswrapper[4693]: E1212 15:48:49.710383 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:50.210353441 +0000 UTC m=+157.378993052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.711545 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:49 crc kubenswrapper[4693]: E1212 15:48:49.714632 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:50.21462271 +0000 UTC m=+157.383262311 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.715862 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck" event={"ID":"4c536f6e-dc3f-407b-81bc-ad0febbae611","Type":"ContainerStarted","Data":"c122b7677c574c06796ed9b7c7067de360ed13ebe01c2fa576f49c1edda9f77f"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.724963 4693 generic.go:334] "Generic (PLEG): container finished" podID="94c146b4-f621-42ff-b0db-5e471b8938b6" containerID="1a626be99604dd97505c7f8ff8681fd72d90e9c7cfde96529dd68284d48dc8da" exitCode=0 Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.725044 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" event={"ID":"94c146b4-f621-42ff-b0db-5e471b8938b6","Type":"ContainerDied","Data":"1a626be99604dd97505c7f8ff8681fd72d90e9c7cfde96529dd68284d48dc8da"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.731527 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" podStartSLOduration=135.731020209 podStartE2EDuration="2m15.731020209s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:49.722613624 +0000 UTC m=+156.891253225" watchObservedRunningTime="2025-12-12 15:48:49.731020209 +0000 UTC m=+156.899659820" Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.749420 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6" event={"ID":"4f59b453-c693-4382-b7f5-82d3c8ee48e9","Type":"ContainerStarted","Data":"e3eac311f0d0b04c347a2416fd3e0f8537c52e28d9bc225c04bd7552eb004330"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.781059 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-qnfsm" podStartSLOduration=5.781042049 podStartE2EDuration="5.781042049s" podCreationTimestamp="2025-12-12 15:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:49.780358211 +0000 UTC m=+156.948997812" watchObservedRunningTime="2025-12-12 15:48:49.781042049 +0000 UTC m=+156.949681650" Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.781607 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wp4hr" event={"ID":"4321ba4d-fc67-4945-a86a-9b6f30ab66ce","Type":"ContainerStarted","Data":"bbc5db35f600649243644f2c208891f7ca5385e81eb30b0078f004971e24618c"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.816179 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:49 crc kubenswrapper[4693]: E1212 15:48:49.817057 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:50.31703681 +0000 UTC m=+157.485676411 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.829164 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd" event={"ID":"1c9f6cb7-17aa-420e-a9b9-af42bd6e4caf","Type":"ContainerStarted","Data":"07b40316631b7f4ef985e9563d6efe2e7f399e51042e2f3ab248bc757e1f781b"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.835052 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qdcck" podStartSLOduration=135.83503147 podStartE2EDuration="2m15.83503147s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:49.828070912 +0000 UTC m=+156.996710513" watchObservedRunningTime="2025-12-12 15:48:49.83503147 +0000 UTC m=+157.003671071" Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.882135 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw"] Dec 12 15:48:49 crc kubenswrapper[4693]: W1212 15:48:49.905423 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61620225_2125_49da_94f6_f6ef9dd7e6ce.slice/crio-e887e4b801166342abb60254d03869e3dd647304aa69617e4eb3f27b294f9093 WatchSource:0}: Error finding container e887e4b801166342abb60254d03869e3dd647304aa69617e4eb3f27b294f9093: Status 404 returned error can't find the container with id e887e4b801166342abb60254d03869e3dd647304aa69617e4eb3f27b294f9093 Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.907266 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" event={"ID":"87e8f397-20cd-469f-924d-204ce1a8db47","Type":"ContainerStarted","Data":"a5668880ee0e6829c325ff86e69fe77011d67cc1a0cdee6b6a6c956ccabaefdc"} Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.918494 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:49 crc 
kubenswrapper[4693]: E1212 15:48:49.919794 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:50.419774017 +0000 UTC m=+157.588413618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.943361 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wd8bp"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.967673 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nnqfp" podStartSLOduration=135.967657002 podStartE2EDuration="2m15.967657002s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:49.906621191 +0000 UTC m=+157.075260802" watchObservedRunningTime="2025-12-12 15:48:49.967657002 +0000 UTC m=+157.136296603" Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.975249 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b"] Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.980975 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k47hd" podStartSLOduration=134.980957383 podStartE2EDuration="2m14.980957383s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:49.965320123 +0000 UTC m=+157.133959724" watchObservedRunningTime="2025-12-12 15:48:49.980957383 +0000 UTC m=+157.149596984" Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.981202 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:48:49 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld Dec 12 15:48:49 crc kubenswrapper[4693]: [+]process-running ok Dec 12 15:48:49 crc kubenswrapper[4693]: healthz check failed Dec 12 15:48:49 crc kubenswrapper[4693]: I1212 15:48:49.981253 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:49.993201 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-wp4hr" podStartSLOduration=135.993182915 podStartE2EDuration="2m15.993182915s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:49.988019813 +0000 UTC m=+157.156659424" watchObservedRunningTime="2025-12-12 15:48:49.993182915 +0000 UTC m=+157.161822516" Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.022204 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:50 crc kubenswrapper[4693]: E1212 15:48:50.023735 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:50.523716557 +0000 UTC m=+157.692356168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.026784 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" podStartSLOduration=136.026769965 podStartE2EDuration="2m16.026769965s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:50.015693531 +0000 UTC m=+157.184333142" watchObservedRunningTime="2025-12-12 15:48:50.026769965 +0000 UTC m=+157.195409566" Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.127624 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:50 crc kubenswrapper[4693]: E1212 15:48:50.128190 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:50.628178639 +0000 UTC m=+157.796818230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.230867 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:50 crc kubenswrapper[4693]: E1212 15:48:50.231305 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:50.731285967 +0000 UTC m=+157.899925568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.332308 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:50 crc kubenswrapper[4693]: E1212 15:48:50.332793 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:50.832781554 +0000 UTC m=+158.001421155 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.435315 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:50 crc kubenswrapper[4693]: E1212 15:48:50.435629 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:50.935612984 +0000 UTC m=+158.104252585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.536657 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:50 crc kubenswrapper[4693]: E1212 15:48:50.537090 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:51.03707329 +0000 UTC m=+158.205712891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.639830 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:50 crc kubenswrapper[4693]: E1212 15:48:50.640242 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:51.140224099 +0000 UTC m=+158.308863700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.748104 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:50 crc kubenswrapper[4693]: E1212 15:48:50.748514 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:51.248498869 +0000 UTC m=+158.417138460 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.848970 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:50 crc kubenswrapper[4693]: E1212 15:48:50.849422 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:51.349402591 +0000 UTC m=+158.518042192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.920198 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" event={"ID":"61620225-2125-49da-94f6-f6ef9dd7e6ce","Type":"ContainerStarted","Data":"e887e4b801166342abb60254d03869e3dd647304aa69617e4eb3f27b294f9093"} Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.923221 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h" event={"ID":"80dd1d93-b2bd-4fad-b199-aa072c2c8216","Type":"ContainerStarted","Data":"1e167007abda5d46744247cd14cbba11a75781c7318006d86ffe54abe2e6cba7"} Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.927491 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-42cgp" event={"ID":"20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157","Type":"ContainerStarted","Data":"5e7fa46dec9efa311059fb74d274f76d304ecc8df389fa532871573ef19f7719"} Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.927547 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-42cgp" event={"ID":"20d2a90c-e5d1-4fa3-bc2f-cba5f3ea0157","Type":"ContainerStarted","Data":"1f8cff95e892283a657c9065d64f5a1a630969ad4be49994c6be8c31f014311f"} Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.948098 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" event={"ID":"62fa5de9-a571-40e5-a32c-e1708a428f19","Type":"ContainerStarted","Data":"f256f43349cf28f83fea42583eab372fb88e9e09f461bf9a43c98da70fedc314"} Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.948140 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.949226 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qp87x" event={"ID":"b59eec2f-4046-439e-a4c3-1201ccdd8cd5","Type":"ContainerStarted","Data":"df0dd9250658a11d4b9bd4fc9634681d1d1369972d79934746a5e6cd23b309e1"} Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.950439 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:50 crc kubenswrapper[4693]: E1212 15:48:50.950772 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:51.450759734 +0000 UTC m=+158.619399335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.952523 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wd8bp" event={"ID":"2872c20a-d73b-43e0-a4c4-dc6238f5d60b","Type":"ContainerStarted","Data":"255b8c621190d5779c4c6f3f9c2c7e8d946c8806349f24b1c51daaa4d5cd088d"} Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.966998 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:48:50 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld Dec 12 15:48:50 crc kubenswrapper[4693]: [+]process-running ok Dec 12 15:48:50 crc kubenswrapper[4693]: healthz check failed Dec 12 15:48:50 crc kubenswrapper[4693]: I1212 15:48:50.967070 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.016744 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-42cgp" podStartSLOduration=7.016724832 podStartE2EDuration="7.016724832s" podCreationTimestamp="2025-12-12 15:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:50.967301967 +0000 UTC m=+158.135941588" watchObservedRunningTime="2025-12-12 15:48:51.016724832 +0000 UTC m=+158.185364433" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.018339 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" podStartSLOduration=137.018329943 podStartE2EDuration="2m17.018329943s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:51.01744918 +0000 UTC m=+158.186088781" watchObservedRunningTime="2025-12-12 15:48:51.018329943 +0000 UTC m=+158.186969544" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.044746 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm" event={"ID":"f92dec3b-e25a-4f3f-a004-e85cc51093c5","Type":"ContainerStarted","Data":"2ad4ffa31387f2ba9710686274b3a2b69a805fa8b2da63f2e2e2678012440018"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.044789 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm" event={"ID":"f92dec3b-e25a-4f3f-a004-e85cc51093c5","Type":"ContainerStarted","Data":"62b19552fd0c1ead6c1f1b1e53acebe6500da73d831e96df13d8746f5b255625"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.048798 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" event={"ID":"901e060a-e400-40cd-bd50-d8bfb7c5127a","Type":"ContainerStarted","Data":"3acf0857c18be90ceb81fca371d185b2f256d2e6177313e335eaa8b3bd769b2d"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.048854 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" event={"ID":"901e060a-e400-40cd-bd50-d8bfb7c5127a","Type":"ContainerStarted","Data":"cb8ab420b40afe520873c0243c26fa2cc175792b2ab410586cda239d89fe96b2"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.051857 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:51 crc kubenswrapper[4693]: E1212 15:48:51.053228 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:51.552984079 +0000 UTC m=+158.721623690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.101577 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6" event={"ID":"c2850071-0204-4052-b9f3-863243d3300b","Type":"ContainerStarted","Data":"e1908757293fed494d07b535e5a266588ade65481bf2993e796eb6fb03f969eb"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.103940 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm" podStartSLOduration=136.103927342 podStartE2EDuration="2m16.103927342s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:51.101627764 +0000 UTC m=+158.270267365" watchObservedRunningTime="2025-12-12 15:48:51.103927342 +0000 UTC m=+158.272566943" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.114426 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" event={"ID":"ef8bea0e-6f25-4c4d-a294-f246fbff9926","Type":"ContainerStarted","Data":"898aed2d6d14c01d8f7b48b5397170dc8223ba71c048bb2819fde0bd514517b8"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.117697 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" event={"ID":"38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95","Type":"ContainerStarted","Data":"704e5981a535815e3d0aa2b0ef343d5ed69f70b480077e26e33a1c4fa93c4793"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.118345 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.128128 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" event={"ID":"f51bf74b-1d86-4a22-a355-f2c64a6516e5","Type":"ContainerStarted","Data":"5dc332e8efc65c41ab73c49cd3f3bb76270165246a873cf2098c2facbefb5af5"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.128906 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.129843 4693 patch_prober.go:28] interesting pod/console-operator-58897d9998-wcl2w container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.129883 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" podUID="f51bf74b-1d86-4a22-a355-f2c64a6516e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused" Dec 12 
15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.139353 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fcvw7" event={"ID":"9675a84b-88dc-4a3c-8fe9-070088ada9b1","Type":"ContainerStarted","Data":"77ff867f5f1c3e7fe8051a904ee36f6e9abb75ad748ef746f4385f655bf9b598"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.147189 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.156104 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:51 crc kubenswrapper[4693]: E1212 15:48:51.157301 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:51.657287998 +0000 UTC m=+158.825927599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.168848 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ks47b" podStartSLOduration=137.168825493 podStartE2EDuration="2m17.168825493s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:51.154965398 +0000 UTC m=+158.323604999" watchObservedRunningTime="2025-12-12 15:48:51.168825493 +0000 UTC m=+158.337465094" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.169904 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-spsbw" event={"ID":"d64fcdf8-100d-4628-beb2-126a10b8f71c","Type":"ContainerStarted","Data":"c521520635036987459f53a61657170ace0d41ca9f03f820a241ea54ed1d771a"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.169952 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-spsbw" event={"ID":"d64fcdf8-100d-4628-beb2-126a10b8f71c","Type":"ContainerStarted","Data":"794631460a34e411b7947fe32c938e80ddfc03b458fc53249bc62830922f2025"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.189909 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx" event={"ID":"38f31f08-dc12-4feb-8567-ab19705f0e16","Type":"ContainerStarted","Data":"19af9a0eaf5de514f73ede333779e3ec2606e9a603156f8f2ebfeab5e0fe36de"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.189954 4693 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx" event={"ID":"38f31f08-dc12-4feb-8567-ab19705f0e16","Type":"ContainerStarted","Data":"6bcea505144a3a637d679a6253ea378ad173b4094fb225148c8eb934b7d2a0f0"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.191797 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-qd9z7" event={"ID":"1baa0c14-c23b-401c-b20f-3789ff63a4c1","Type":"ContainerStarted","Data":"eaf1a9e6fee913c4c2036a881ea20451ecaf0cb99635982315bea044e2c0410e"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.191852 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-qd9z7" event={"ID":"1baa0c14-c23b-401c-b20f-3789ff63a4c1","Type":"ContainerStarted","Data":"d4cf630c59ea55702e37ac0cc1e39bf36f90b2dd7f83cc009b415a43131c86c4"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.193928 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r" event={"ID":"44b1f98a-591c-49e4-9d2a-a0130f336528","Type":"ContainerStarted","Data":"972d24bf36d41ac07707d09b48a0b78134453683c929e373e855fa44c20417a7"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.215519 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bz9v2" event={"ID":"586d0874-ebe1-41db-b596-1dfed12b2b94","Type":"ContainerStarted","Data":"f22621976ec9818eb3dac9e2052782b703a33dd774c92ca306be25569680a044"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.215608 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" podStartSLOduration=137.215576099 podStartE2EDuration="2m17.215576099s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:51.21484421 +0000 UTC m=+158.383483811" watchObservedRunningTime="2025-12-12 15:48:51.215576099 +0000 UTC m=+158.384215700" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.216837 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" event={"ID":"d3b86c37-5764-4b23-b927-ad4a77885456","Type":"ContainerStarted","Data":"a09c13971913d7188598ad3f1497e284d84ff5e60bd73b154294ca404e29cccf"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.216874 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" event={"ID":"d3b86c37-5764-4b23-b927-ad4a77885456","Type":"ContainerStarted","Data":"d17fc6156e906a9bfea23f5146e5e71ea4c2187b59e1969eac4fd26639597eee"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.217716 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.219191 4693 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r6qvl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.219228 4693 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" podUID="d3b86c37-5764-4b23-b927-ad4a77885456" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.236698 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d28jp" event={"ID":"31b7f38e-5f91-43bf-bba4-bc8592747704","Type":"ContainerStarted","Data":"7a917b589e5eaa823b9d8652115ea56502301ede55c5dc755b5feda843a25471"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.244065 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" event={"ID":"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112","Type":"ContainerStarted","Data":"a3bcd1d51e62856ef6ca0239ee5c9ddf9d1488a0b7f21ca8e1262244a8738102"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.244135 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" event={"ID":"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112","Type":"ContainerStarted","Data":"4263cd488490f654e64a8b14ee486c715852b9bc053c67477c4c0c779eaaa1e4"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.245405 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.247120 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" event={"ID":"7b1c4746-f772-49d8-be11-9abc850ea7e2","Type":"ContainerStarted","Data":"7ca443e5dbb51dff5a7fd4d4704e9349bc48ab8c2073afe678d2afe7fb0a3d00"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.248251 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj" event={"ID":"19df2367-f186-4892-83e6-bee3c8177dc2","Type":"ContainerStarted","Data":"0957c2854170956ccd7c455b092ecdde3a058d03173c017bdec7f22b4cfcc6be"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.257321 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:51 crc kubenswrapper[4693]: E1212 15:48:51.258411 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:51.758395524 +0000 UTC m=+158.927035125 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.276568 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6" event={"ID":"4f59b453-c693-4382-b7f5-82d3c8ee48e9","Type":"ContainerStarted","Data":"3ef4d7f5eb65f9e897458d261629f5f74e55dfc9a404b0b47c48676b56f06981"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.298126 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" podStartSLOduration=136.29810845 podStartE2EDuration="2m16.29810845s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:51.27542449 +0000 UTC m=+158.444064091" watchObservedRunningTime="2025-12-12 15:48:51.29810845 +0000 UTC m=+158.466748051" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.331581 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" event={"ID":"19166153-66f0-4f4f-8f4b-ef7af5a72770","Type":"ContainerStarted","Data":"51aa2afa7fa2d637d8f5fe18e04e9eef7dab2535bba9046af2f635b00aa2debd"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.338106 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gr2s6" podStartSLOduration=136.338083363 podStartE2EDuration="2m16.338083363s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:51.3270112 +0000 UTC m=+158.495650801" watchObservedRunningTime="2025-12-12 15:48:51.338083363 +0000 UTC m=+158.506722964" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.383212 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:51 crc kubenswrapper[4693]: E1212 15:48:51.389241 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:51.889227542 +0000 UTC m=+159.057867143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.411267 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" podStartSLOduration=136.411244645 podStartE2EDuration="2m16.411244645s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:51.372074923 +0000 UTC m=+158.540714534" watchObservedRunningTime="2025-12-12 15:48:51.411244645 +0000 UTC m=+158.579884256" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.458833 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" podStartSLOduration=136.458813372 podStartE2EDuration="2m16.458813372s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:51.412453046 +0000 UTC m=+158.581092647" watchObservedRunningTime="2025-12-12 15:48:51.458813372 +0000 UTC m=+158.627452973" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.459493 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-29ww6" podStartSLOduration=136.459485879 podStartE2EDuration="2m16.459485879s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:51.45680849 +0000 UTC m=+158.625448091" watchObservedRunningTime="2025-12-12 15:48:51.459485879 +0000 UTC m=+158.628125480" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.477589 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" event={"ID":"8832f47a-79fe-4045-91d9-d42f21a2652f","Type":"ContainerStarted","Data":"64a01360c4322234783a8881bba7cfe40f2f3ee6b1358b1a6a46b085f4f85a82"} Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.492907 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:51 crc kubenswrapper[4693]: E1212 15:48:51.494251 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:51.994235068 +0000 UTC m=+159.162874669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.531675 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-qd9z7" podStartSLOduration=136.531658626 podStartE2EDuration="2m16.531658626s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:51.489440415 +0000 UTC m=+158.658080016" watchObservedRunningTime="2025-12-12 15:48:51.531658626 +0000 UTC m=+158.700298227" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.575018 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-d28jp" podStartSLOduration=137.574996004 podStartE2EDuration="2m17.574996004s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:51.571978737 +0000 UTC m=+158.740618338" watchObservedRunningTime="2025-12-12 15:48:51.574996004 +0000 UTC m=+158.743635615" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.596106 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:51 crc kubenswrapper[4693]: E1212 15:48:51.596541 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:52.096526115 +0000 UTC m=+159.265165716 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.697592 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:51 crc kubenswrapper[4693]: E1212 15:48:51.698661 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:52.198626967 +0000 UTC m=+159.367266568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.801012 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:51 crc kubenswrapper[4693]: E1212 15:48:51.801507 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:52.301489939 +0000 UTC m=+159.470129550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.853437 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.853507 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.904939 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:51 crc kubenswrapper[4693]: E1212 15:48:51.905091 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:52.405060119 +0000 UTC m=+159.573699720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.905128 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:51 crc kubenswrapper[4693]: E1212 15:48:51.905550 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:52.405535131 +0000 UTC m=+159.574174732 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.966052 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:48:51 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld Dec 12 15:48:51 crc kubenswrapper[4693]: [+]process-running ok Dec 12 15:48:51 crc kubenswrapper[4693]: healthz check failed Dec 12 15:48:51 crc kubenswrapper[4693]: I1212 15:48:51.966510 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.006587 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 15:48:52 crc kubenswrapper[4693]: E1212 15:48:52.006919 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:52.506903354 +0000 UTC m=+159.675542955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.085758 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.109252 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:52 crc kubenswrapper[4693]: E1212 15:48:52.109672 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.199221 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-x6zxm" podStartSLOduration=138.199201084 podStartE2EDuration="2m18.199201084s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:51.621081073 +0000 UTC m=+158.789720674" watchObservedRunningTime="2025-12-12 15:48:52.199201084 +0000 UTC m=+159.367840685"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.209974 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:52 crc kubenswrapper[4693]: E1212 15:48:52.210329 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:52.710313128 +0000 UTC m=+159.878952729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.312048 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:52 crc kubenswrapper[4693]: E1212 15:48:52.312416 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:52.81239844 +0000 UTC m=+159.981038041 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.413761 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:52 crc kubenswrapper[4693]: E1212 15:48:52.414047 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:52.91402838 +0000 UTC m=+160.082667981 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.428668 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qp87x" event={"ID":"b59eec2f-4046-439e-a4c3-1201ccdd8cd5","Type":"ContainerStarted","Data":"c5f633b51c61de3c500c03b82167d9dd8951bcb611b2aade49a24dbc418db6d0"}
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.432123 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" event={"ID":"94c146b4-f621-42ff-b0db-5e471b8938b6","Type":"ContainerStarted","Data":"1e5a9142ea4d59cfe8f1ee127e57ed16e2525d3728ac655531093f8c29a7947d"}
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.446341 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wd8bp" event={"ID":"2872c20a-d73b-43e0-a4c4-dc6238f5d60b","Type":"ContainerStarted","Data":"ec77158ca1b180cdc311db1a1d21292c0a7a6cb9527e52c86436f7351ee04895"}
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.447886 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" event={"ID":"8832f47a-79fe-4045-91d9-d42f21a2652f","Type":"ContainerStarted","Data":"6a5179fccd8d1b0974f0437ed51d8d5fa217aa4bf2cfd172d0bff452b39a950d"}
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.449734 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" event={"ID":"ef8bea0e-6f25-4c4d-a294-f246fbff9926","Type":"ContainerStarted","Data":"e90c200373cabc4ea720ea9310649a3f9d35c2d4a46a4bcb9abbc87e730ac6bb"}
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.450915 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" event={"ID":"7b1c4746-f772-49d8-be11-9abc850ea7e2","Type":"ContainerStarted","Data":"1d1f1c715f9ccd1a4813d54250ca62c09726b8a742de00ccb60a1693b4826edf"}
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.451751 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-npwzs"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.458477 4693 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-npwzs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body=
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.458544 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" podUID="7b1c4746-f772-49d8-be11-9abc850ea7e2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.471625 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" event={"ID":"61620225-2125-49da-94f6-f6ef9dd7e6ce","Type":"ContainerStarted","Data":"82d8ce9a09e27e3be2af59e50054920230529cb0173716e36fd4edafc1822555"}
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.472009 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.475590 4693 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d86dw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body=
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.475666 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" podUID="61620225-2125-49da-94f6-f6ef9dd7e6ce" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.477634 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj" event={"ID":"19df2367-f186-4892-83e6-bee3c8177dc2","Type":"ContainerStarted","Data":"799384a164e59d5664b33f9bc42a8d42f0a993d0cf19329066a85c7490b31d47"}
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.480677 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-spsbw" event={"ID":"d64fcdf8-100d-4628-beb2-126a10b8f71c","Type":"ContainerStarted","Data":"6b5e7403c29366d92b9b0906068de8d077266c301cea247a69da1d4710d29911"}
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.495809 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bz9v2" event={"ID":"586d0874-ebe1-41db-b596-1dfed12b2b94","Type":"ContainerStarted","Data":"2b10af32c4007ec740d0c722da823039b3b673d3ff0fbd15170a76190caaf984"}
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.496760 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bz9v2"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.497996 4693 patch_prober.go:28] interesting pod/downloads-7954f5f757-bz9v2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body=
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.498034 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bz9v2" podUID="586d0874-ebe1-41db-b596-1dfed12b2b94" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.504187 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx" event={"ID":"38f31f08-dc12-4feb-8567-ab19705f0e16","Type":"ContainerStarted","Data":"1b3508f885540a7c87e5b8b694f0920102dfcfd1f3301608c6e82f47cdced96f"}
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.506106 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r" event={"ID":"44b1f98a-591c-49e4-9d2a-a0130f336528","Type":"ContainerStarted","Data":"478233e7659705aeae567cb020e42a7d5c8a7f8179c59f7546da9cdca73d6010"}
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.508136 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fcvw7" event={"ID":"9675a84b-88dc-4a3c-8fe9-070088ada9b1","Type":"ContainerStarted","Data":"65bcbab48eba67449f17810f80d12f65598b3df59c19c5db08d806aecbd0731c"}
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.508167 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fcvw7" event={"ID":"9675a84b-88dc-4a3c-8fe9-070088ada9b1","Type":"ContainerStarted","Data":"03a2db21dc75f06b0ff61868636f6a36e126664206ccc01bf9b17183c8e160ea"}
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.510673 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h" event={"ID":"80dd1d93-b2bd-4fad-b199-aa072c2c8216","Type":"ContainerStarted","Data":"f8883070a26c0524736aa77064ca45db033e6d73dd7f6e7f1f7a515d985c6036"}
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.510729 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h" event={"ID":"80dd1d93-b2bd-4fad-b199-aa072c2c8216","Type":"ContainerStarted","Data":"42d85b584929503ee4c7a18292806c79b04a72d5d734faa719afce016b40ddcb"}
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.537834 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:52 crc kubenswrapper[4693]: E1212 15:48:52.538352 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:53.03833552 +0000 UTC m=+160.206975121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.600553 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-spsbw" podStartSLOduration=137.600535652 podStartE2EDuration="2m17.600535652s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:52.600206583 +0000 UTC m=+159.768846184" watchObservedRunningTime="2025-12-12 15:48:52.600535652 +0000 UTC m=+159.769175253"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.601840 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" podStartSLOduration=137.601832915 podStartE2EDuration="2m17.601832915s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:52.536062212 +0000 UTC m=+159.704701813" watchObservedRunningTime="2025-12-12 15:48:52.601832915 +0000 UTC m=+159.770472526"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.643508 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:52 crc kubenswrapper[4693]: E1212 15:48:52.645406 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:53.145387899 +0000 UTC m=+160.314027490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.651895 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.688238 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" podStartSLOduration=137.688219465 podStartE2EDuration="2m17.688219465s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:52.643185173 +0000 UTC m=+159.811824774" watchObservedRunningTime="2025-12-12 15:48:52.688219465 +0000 UTC m=+159.856859066"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.689785 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-xqnqh" podStartSLOduration=138.689774465 podStartE2EDuration="2m18.689774465s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:52.687716842 +0000 UTC m=+159.856356443" watchObservedRunningTime="2025-12-12 15:48:52.689774465 +0000 UTC m=+159.858414086"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.752859 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:52 crc kubenswrapper[4693]: E1212 15:48:52.753545 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:53.253530656 +0000 UTC m=+160.422170257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.789654 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hr46r" podStartSLOduration=137.78963599 podStartE2EDuration="2m17.78963599s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:52.787722711 +0000 UTC m=+159.956362332" watchObservedRunningTime="2025-12-12 15:48:52.78963599 +0000 UTC m=+159.958275601"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.791098 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" podStartSLOduration=137.791085557 podStartE2EDuration="2m17.791085557s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:52.75172129 +0000 UTC m=+159.920360881" watchObservedRunningTime="2025-12-12 15:48:52.791085557 +0000 UTC m=+159.959725168"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.833316 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fcvw7" podStartSLOduration=138.833298767 podStartE2EDuration="2m18.833298767s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:52.83185703 +0000 UTC m=+160.000496631" watchObservedRunningTime="2025-12-12 15:48:52.833298767 +0000 UTC m=+160.001938368"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.858115 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:52 crc kubenswrapper[4693]: E1212 15:48:52.858585 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:53.358566473 +0000 UTC m=+160.527206074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.923056 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pxqqx" podStartSLOduration=137.923042183 podStartE2EDuration="2m17.923042183s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:52.922721154 +0000 UTC m=+160.091360755" watchObservedRunningTime="2025-12-12 15:48:52.923042183 +0000 UTC m=+160.091681784"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.923578 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bz9v2" podStartSLOduration=138.923573466 podStartE2EDuration="2m18.923573466s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:52.877645291 +0000 UTC m=+160.046284892" watchObservedRunningTime="2025-12-12 15:48:52.923573466 +0000 UTC m=+160.092213067"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.958551 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:48:52 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld
Dec 12 15:48:52 crc kubenswrapper[4693]: [+]process-running ok
Dec 12 15:48:52 crc kubenswrapper[4693]: healthz check failed
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.958601 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.959632 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:52 crc kubenswrapper[4693]: E1212 15:48:52.959911 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:53.459901746 +0000 UTC m=+160.628541337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:52 crc kubenswrapper[4693]: I1212 15:48:52.964862 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bwgbj" podStartSLOduration=137.964842992 podStartE2EDuration="2m17.964842992s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:52.959030883 +0000 UTC m=+160.127670484" watchObservedRunningTime="2025-12-12 15:48:52.964842992 +0000 UTC m=+160.133482593"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.013364 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h" podStartSLOduration=138.013346733 podStartE2EDuration="2m18.013346733s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:53.011036804 +0000 UTC m=+160.179676405" watchObservedRunningTime="2025-12-12 15:48:53.013346733 +0000 UTC m=+160.181986334"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.060137 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:53 crc kubenswrapper[4693]: E1212 15:48:53.060350 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:53.560333825 +0000 UTC m=+160.728973426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
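The pod_startup_latency_tracker entries interleaved here record, for each pod, the time from creation to observed running (podStartE2EDuration, around 2m17 to 2m18 for most of these operator pods because the whole node was cold-starting). When triaging a saved journal like this one, pulling those durations out programmatically can be handy; the small Go filter below is illustrative (it reads the reflowed log on stdin, and the regular expression is an assumption about the line format shown above, not a stable kubelet interface):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches the tracker lines above, capturing the pod name and its E2E startup duration.
var re = regexp.MustCompile(`pod="([^"]+)".*podStartE2EDuration="([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			if d, err := time.ParseDuration(m[2]); err == nil {
				fmt.Printf("%-70s %v\n", m[1], d)
			}
		}
	}
}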
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.157329 4693 patch_prober.go:28] interesting pod/apiserver-76f77b778f-d28jp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 12 15:48:53 crc kubenswrapper[4693]: [+]log ok
Dec 12 15:48:53 crc kubenswrapper[4693]: [+]etcd ok
Dec 12 15:48:53 crc kubenswrapper[4693]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 12 15:48:53 crc kubenswrapper[4693]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 12 15:48:53 crc kubenswrapper[4693]: [+]poststarthook/max-in-flight-filter ok
Dec 12 15:48:53 crc kubenswrapper[4693]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 12 15:48:53 crc kubenswrapper[4693]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Dec 12 15:48:53 crc kubenswrapper[4693]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Dec 12 15:48:53 crc kubenswrapper[4693]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Dec 12 15:48:53 crc kubenswrapper[4693]: [+]poststarthook/project.openshift.io-projectcache ok
Dec 12 15:48:53 crc kubenswrapper[4693]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Dec 12 15:48:53 crc kubenswrapper[4693]: [+]poststarthook/openshift.io-startinformers ok
Dec 12 15:48:53 crc kubenswrapper[4693]: [+]poststarthook/openshift.io-restmapperupdater ok
Dec 12 15:48:53 crc kubenswrapper[4693]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 12 15:48:53 crc kubenswrapper[4693]: livez check failed
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.157419 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-d28jp" podUID="31b7f38e-5f91-43bf-bba4-bc8592747704" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.160653 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:53 crc kubenswrapper[4693]: E1212 15:48:53.160920 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:53.660909538 +0000 UTC m=+160.829549139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.242094 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-wcl2w"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.262351 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:53 crc kubenswrapper[4693]: E1212 15:48:53.262473 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:53.762454456 +0000 UTC m=+160.931094057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.262518 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:53 crc kubenswrapper[4693]: E1212 15:48:53.262825 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:53.762817315 +0000 UTC m=+160.931456916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.366454 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:53 crc kubenswrapper[4693]: E1212 15:48:53.366790 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:53.866775795 +0000 UTC m=+161.035415396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.368022 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:53 crc kubenswrapper[4693]: E1212 15:48:53.368533 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:53.86852404 +0000 UTC m=+161.037163641 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.468951 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:53 crc kubenswrapper[4693]: E1212 15:48:53.469174 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:53.969159083 +0000 UTC m=+161.137798684 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.521585 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qp87x" event={"ID":"b59eec2f-4046-439e-a4c3-1201ccdd8cd5","Type":"ContainerStarted","Data":"5f353bb451b1af67c8eaf0bf358bb4025c7ad6cef5ce663db71501c26b14242f"}
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.525533 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wd8bp" event={"ID":"2872c20a-d73b-43e0-a4c4-dc6238f5d60b","Type":"ContainerStarted","Data":"413e7c4dc3ec4bd9ea72019bfe029e0776503cf09226cfd9ce937ab70e9a9d3b"}
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.526236 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wd8bp"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.533193 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" event={"ID":"ef8bea0e-6f25-4c4d-a294-f246fbff9926","Type":"ContainerStarted","Data":"1db730a5674b052fa90319923169929b8dc999548d26d484e9f807c7bc494462"}
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.534796 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.536306 4693 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-npwzs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body=
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.536349 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" podUID="7b1c4746-f772-49d8-be11-9abc850ea7e2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.536716 4693 patch_prober.go:28] interesting pod/downloads-7954f5f757-bz9v2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body=
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.536740 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bz9v2" podUID="586d0874-ebe1-41db-b596-1dfed12b2b94" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.565883 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-qp87x" podStartSLOduration=138.565862017 podStartE2EDuration="2m18.565862017s" podCreationTimestamp="2025-12-12 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:53.562571443 +0000 UTC m=+160.731211044" watchObservedRunningTime="2025-12-12 15:48:53.565862017 +0000 UTC m=+160.734501618"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.577509 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:53 crc kubenswrapper[4693]: E1212 15:48:53.580917 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:54.080860581 +0000 UTC m=+161.249500182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.612234 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wd8bp" podStartSLOduration=9.612218883 podStartE2EDuration="9.612218883s" podCreationTimestamp="2025-12-12 15:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:53.611747551 +0000 UTC m=+160.780387152" watchObservedRunningTime="2025-12-12 15:48:53.612218883 +0000 UTC m=+160.780858474"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.685048 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:53 crc kubenswrapper[4693]: E1212 15:48:53.685565 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:54.185545449 +0000 UTC m=+161.354185050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.786924 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:53 crc kubenswrapper[4693]: E1212 15:48:53.787494 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:54.287482427 +0000 UTC m=+161.456122028 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.888424 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:53 crc kubenswrapper[4693]: E1212 15:48:53.888612 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 15:48:54.388583384 +0000 UTC m=+161.557222985 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.915196 4693 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.941321 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fvk2k"]
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.942376 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvk2k"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.950123 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.961766 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 15:48:53 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld
Dec 12 15:48:53 crc kubenswrapper[4693]: [+]process-running ok
Dec 12 15:48:53 crc kubenswrapper[4693]: healthz check failed
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.961821 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.966469 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fvk2k"]
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.974639 4693 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-12T15:48:53.915236136Z","Handler":null,"Name":""}
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.975136 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.993125 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.993199 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12f193b-21da-485e-a825-03f5bd5070b1-utilities\") pod \"community-operators-fvk2k\" (UID: \"a12f193b-21da-485e-a825-03f5bd5070b1\") " pod="openshift-marketplace/community-operators-fvk2k"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.993224 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12f193b-21da-485e-a825-03f5bd5070b1-catalog-content\") pod \"community-operators-fvk2k\" (UID: \"a12f193b-21da-485e-a825-03f5bd5070b1\") " pod="openshift-marketplace/community-operators-fvk2k"
Dec 12 15:48:53 crc kubenswrapper[4693]: E1212 15:48:53.993571 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 15:48:54.493555529 +0000 UTC m=+161.662195300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47c86" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.993646 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb757\" (UniqueName: \"kubernetes.io/projected/a12f193b-21da-485e-a825-03f5bd5070b1-kube-api-access-bb757\") pod \"community-operators-fvk2k\" (UID: \"a12f193b-21da-485e-a825-03f5bd5070b1\") " pod="openshift-marketplace/community-operators-fvk2k"
Dec 12 15:48:53 crc kubenswrapper[4693]: I1212 15:48:53.996723 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.000152 4693 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.000219 4693 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.098843 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.099138 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12f193b-21da-485e-a825-03f5bd5070b1-utilities\") pod \"community-operators-fvk2k\" (UID: \"a12f193b-21da-485e-a825-03f5bd5070b1\") " pod="openshift-marketplace/community-operators-fvk2k"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.099167 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12f193b-21da-485e-a825-03f5bd5070b1-catalog-content\") pod \"community-operators-fvk2k\" (UID: \"a12f193b-21da-485e-a825-03f5bd5070b1\") " pod="openshift-marketplace/community-operators-fvk2k"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.099217 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb757\" (UniqueName: \"kubernetes.io/projected/a12f193b-21da-485e-a825-03f5bd5070b1-kube-api-access-bb757\") pod \"community-operators-fvk2k\" (UID: \"a12f193b-21da-485e-a825-03f5bd5070b1\") " pod="openshift-marketplace/community-operators-fvk2k"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.100397 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12f193b-21da-485e-a825-03f5bd5070b1-utilities\") pod \"community-operators-fvk2k\" (UID: \"a12f193b-21da-485e-a825-03f5bd5070b1\") " pod="openshift-marketplace/community-operators-fvk2k"
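This is the turning point of the whole sequence: at 15:48:53.915 the kubelet's plugin watcher notices the registration socket under /var/lib/kubelet/plugins_registry, RegisterPlugin runs, and csi_plugin validates and registers kubevirt.io.hostpath-provisioner. Once that completes, the long-failing MountDevice for the image-registry PVC finally succeeds (see the "MountVolume.MountDevice succeeded" entry below). The discovery step amounts to watching a directory for new plugin sockets; the minimal sketch below uses github.com/fsnotify/fsnotify as an assumed stand-in and is illustrative only, not the kubelet's actual plugin watcher:

package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	// The kubelet watches this directory for driver registration sockets.
	if err := w.Add("/var/lib/kubelet/plugins_registry"); err != nil {
		log.Fatal(err)
	}
	for ev := range w.Events {
		// A CREATE of a *-reg.sock file is what triggers plugin registration,
		// as with kubevirt.io.hostpath-provisioner-reg.sock above.
		if ev.Op&fsnotify.Create != 0 && strings.HasSuffix(ev.Name, "-reg.sock") {
			fmt.Println("new plugin socket:", ev.Name)
		}
	}
}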
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.100622 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12f193b-21da-485e-a825-03f5bd5070b1-catalog-content\") pod \"community-operators-fvk2k\" (UID: \"a12f193b-21da-485e-a825-03f5bd5070b1\") " pod="openshift-marketplace/community-operators-fvk2k"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.119926 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.128367 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v9pf7"]
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.129455 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9pf7"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.134347 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.158051 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v9pf7"]
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.205634 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.213120 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb757\" (UniqueName: \"kubernetes.io/projected/a12f193b-21da-485e-a825-03f5bd5070b1-kube-api-access-bb757\") pod \"community-operators-fvk2k\" (UID: \"a12f193b-21da-485e-a825-03f5bd5070b1\") " pod="openshift-marketplace/community-operators-fvk2k"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.250513 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.250588 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.261743 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvk2k"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.308383 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35b458b-b638-4684-8f5b-bcf2d0cf692f-utilities\") pod \"certified-operators-v9pf7\" (UID: \"e35b458b-b638-4684-8f5b-bcf2d0cf692f\") " pod="openshift-marketplace/certified-operators-v9pf7"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.308739 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlpgc\" (UniqueName: \"kubernetes.io/projected/e35b458b-b638-4684-8f5b-bcf2d0cf692f-kube-api-access-dlpgc\") pod \"certified-operators-v9pf7\" (UID: \"e35b458b-b638-4684-8f5b-bcf2d0cf692f\") " pod="openshift-marketplace/certified-operators-v9pf7"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.308792 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35b458b-b638-4684-8f5b-bcf2d0cf692f-catalog-content\") pod \"certified-operators-v9pf7\" (UID: \"e35b458b-b638-4684-8f5b-bcf2d0cf692f\") " pod="openshift-marketplace/certified-operators-v9pf7"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.329169 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ctq58"]
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.331446 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ctq58"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.337054 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ctq58"]
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.371738 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47c86\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.416679 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlpgc\" (UniqueName: \"kubernetes.io/projected/e35b458b-b638-4684-8f5b-bcf2d0cf692f-kube-api-access-dlpgc\") pod \"certified-operators-v9pf7\" (UID: \"e35b458b-b638-4684-8f5b-bcf2d0cf692f\") " pod="openshift-marketplace/certified-operators-v9pf7"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.416761 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35b458b-b638-4684-8f5b-bcf2d0cf692f-catalog-content\") pod \"certified-operators-v9pf7\" (UID: \"e35b458b-b638-4684-8f5b-bcf2d0cf692f\") " pod="openshift-marketplace/certified-operators-v9pf7"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.416788 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35b458b-b638-4684-8f5b-bcf2d0cf692f-utilities\") pod \"certified-operators-v9pf7\" (UID: \"e35b458b-b638-4684-8f5b-bcf2d0cf692f\") " pod="openshift-marketplace/certified-operators-v9pf7"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.417512 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35b458b-b638-4684-8f5b-bcf2d0cf692f-utilities\") pod \"certified-operators-v9pf7\" (UID: \"e35b458b-b638-4684-8f5b-bcf2d0cf692f\") " pod="openshift-marketplace/certified-operators-v9pf7"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.417600 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35b458b-b638-4684-8f5b-bcf2d0cf692f-catalog-content\") pod \"certified-operators-v9pf7\" (UID: \"e35b458b-b638-4684-8f5b-bcf2d0cf692f\") " pod="openshift-marketplace/certified-operators-v9pf7"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.446031 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlpgc\" (UniqueName: \"kubernetes.io/projected/e35b458b-b638-4684-8f5b-bcf2d0cf692f-kube-api-access-dlpgc\") pod \"certified-operators-v9pf7\" (UID: \"e35b458b-b638-4684-8f5b-bcf2d0cf692f\") " pod="openshift-marketplace/certified-operators-v9pf7"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.461797 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9pf7"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.462754 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-47c86"
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.520395 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p4cpj"]
Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.523505 4693 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.525303 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86drp\" (UniqueName: \"kubernetes.io/projected/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-kube-api-access-86drp\") pod \"community-operators-ctq58\" (UID: \"77421421-26f5-4e9a-8857-bd1f5a9d8fa9\") " pod="openshift-marketplace/community-operators-ctq58" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.525389 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-utilities\") pod \"community-operators-ctq58\" (UID: \"77421421-26f5-4e9a-8857-bd1f5a9d8fa9\") " pod="openshift-marketplace/community-operators-ctq58" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.525782 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-catalog-content\") pod \"community-operators-ctq58\" (UID: \"77421421-26f5-4e9a-8857-bd1f5a9d8fa9\") " pod="openshift-marketplace/community-operators-ctq58" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.532803 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p4cpj"] Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.566486 4693 generic.go:334] "Generic (PLEG): container finished" podID="f92dec3b-e25a-4f3f-a004-e85cc51093c5" containerID="2ad4ffa31387f2ba9710686274b3a2b69a805fa8b2da63f2e2e2678012440018" exitCode=0 Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.566563 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm" event={"ID":"f92dec3b-e25a-4f3f-a004-e85cc51093c5","Type":"ContainerDied","Data":"2ad4ffa31387f2ba9710686274b3a2b69a805fa8b2da63f2e2e2678012440018"} Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.575449 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" event={"ID":"ef8bea0e-6f25-4c4d-a294-f246fbff9926","Type":"ContainerStarted","Data":"afc5e0e314d8213180da1e507bbb4cc0dde7ff5630008e4b6730efd0d324faf1"} Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.575479 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" event={"ID":"ef8bea0e-6f25-4c4d-a294-f246fbff9926","Type":"ContainerStarted","Data":"58beaf4cf4e1f62a342febedf4b299455175211c86dced80a52b9c6a047aaf95"} Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.580957 4693 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-npwzs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.581000 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" podUID="7b1c4746-f772-49d8-be11-9abc850ea7e2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 
15:48:54.581109 4693 patch_prober.go:28] interesting pod/downloads-7954f5f757-bz9v2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.581187 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bz9v2" podUID="586d0874-ebe1-41db-b596-1dfed12b2b94" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.627055 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" podStartSLOduration=10.627033826 podStartE2EDuration="10.627033826s" podCreationTimestamp="2025-12-12 15:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:54.624007739 +0000 UTC m=+161.792647340" watchObservedRunningTime="2025-12-12 15:48:54.627033826 +0000 UTC m=+161.795673437" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.629350 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdmc2\" (UniqueName: \"kubernetes.io/projected/38d663d8-7b9e-4685-9b27-cdf525b225af-kube-api-access-mdmc2\") pod \"certified-operators-p4cpj\" (UID: \"38d663d8-7b9e-4685-9b27-cdf525b225af\") " pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.629496 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86drp\" (UniqueName: \"kubernetes.io/projected/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-kube-api-access-86drp\") pod \"community-operators-ctq58\" (UID: \"77421421-26f5-4e9a-8857-bd1f5a9d8fa9\") " pod="openshift-marketplace/community-operators-ctq58" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.629578 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d663d8-7b9e-4685-9b27-cdf525b225af-utilities\") pod \"certified-operators-p4cpj\" (UID: \"38d663d8-7b9e-4685-9b27-cdf525b225af\") " pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.629643 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-utilities\") pod \"community-operators-ctq58\" (UID: \"77421421-26f5-4e9a-8857-bd1f5a9d8fa9\") " pod="openshift-marketplace/community-operators-ctq58" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.629731 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d663d8-7b9e-4685-9b27-cdf525b225af-catalog-content\") pod \"certified-operators-p4cpj\" (UID: \"38d663d8-7b9e-4685-9b27-cdf525b225af\") " pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.629764 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-catalog-content\") pod 
\"community-operators-ctq58\" (UID: \"77421421-26f5-4e9a-8857-bd1f5a9d8fa9\") " pod="openshift-marketplace/community-operators-ctq58" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.635050 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-utilities\") pod \"community-operators-ctq58\" (UID: \"77421421-26f5-4e9a-8857-bd1f5a9d8fa9\") " pod="openshift-marketplace/community-operators-ctq58" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.635814 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-catalog-content\") pod \"community-operators-ctq58\" (UID: \"77421421-26f5-4e9a-8857-bd1f5a9d8fa9\") " pod="openshift-marketplace/community-operators-ctq58" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.654907 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86drp\" (UniqueName: \"kubernetes.io/projected/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-kube-api-access-86drp\") pod \"community-operators-ctq58\" (UID: \"77421421-26f5-4e9a-8857-bd1f5a9d8fa9\") " pod="openshift-marketplace/community-operators-ctq58" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.679053 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ctq58" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.730578 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d663d8-7b9e-4685-9b27-cdf525b225af-utilities\") pod \"certified-operators-p4cpj\" (UID: \"38d663d8-7b9e-4685-9b27-cdf525b225af\") " pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.731042 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d663d8-7b9e-4685-9b27-cdf525b225af-catalog-content\") pod \"certified-operators-p4cpj\" (UID: \"38d663d8-7b9e-4685-9b27-cdf525b225af\") " pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.731094 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdmc2\" (UniqueName: \"kubernetes.io/projected/38d663d8-7b9e-4685-9b27-cdf525b225af-kube-api-access-mdmc2\") pod \"certified-operators-p4cpj\" (UID: \"38d663d8-7b9e-4685-9b27-cdf525b225af\") " pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.731637 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d663d8-7b9e-4685-9b27-cdf525b225af-utilities\") pod \"certified-operators-p4cpj\" (UID: \"38d663d8-7b9e-4685-9b27-cdf525b225af\") " pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.731820 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d663d8-7b9e-4685-9b27-cdf525b225af-catalog-content\") pod \"certified-operators-p4cpj\" (UID: \"38d663d8-7b9e-4685-9b27-cdf525b225af\") " pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.750095 4693 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mdmc2\" (UniqueName: \"kubernetes.io/projected/38d663d8-7b9e-4685-9b27-cdf525b225af-kube-api-access-mdmc2\") pod \"certified-operators-p4cpj\" (UID: \"38d663d8-7b9e-4685-9b27-cdf525b225af\") " pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.822786 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fvk2k"] Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.873667 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-47c86"] Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.904702 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.972448 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:48:54 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld Dec 12 15:48:54 crc kubenswrapper[4693]: [+]process-running ok Dec 12 15:48:54 crc kubenswrapper[4693]: healthz check failed Dec 12 15:48:54 crc kubenswrapper[4693]: I1212 15:48:54.972500 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.031820 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ctq58"] Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.154576 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v9pf7"] Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.310915 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p4cpj"] Dec 12 15:48:55 crc kubenswrapper[4693]: W1212 15:48:55.348820 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38d663d8_7b9e_4685_9b27_cdf525b225af.slice/crio-c3b69af7709adfc187eb8fa3830046b2b702c3a183804d0380eb1b817a8ab964 WatchSource:0}: Error finding container c3b69af7709adfc187eb8fa3830046b2b702c3a183804d0380eb1b817a8ab964: Status 404 returned error can't find the container with id c3b69af7709adfc187eb8fa3830046b2b702c3a183804d0380eb1b817a8ab964 Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.373343 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.587086 4693 generic.go:334] "Generic (PLEG): container finished" podID="a12f193b-21da-485e-a825-03f5bd5070b1" containerID="7b7be798167610ad2561ec6eff87c2ed663fef45f377ee38ab65a8cbf05ffbd8" exitCode=0 Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.587140 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvk2k" 
event={"ID":"a12f193b-21da-485e-a825-03f5bd5070b1","Type":"ContainerDied","Data":"7b7be798167610ad2561ec6eff87c2ed663fef45f377ee38ab65a8cbf05ffbd8"} Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.587167 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvk2k" event={"ID":"a12f193b-21da-485e-a825-03f5bd5070b1","Type":"ContainerStarted","Data":"3a3bc0a85090b3a022ddbdc07d3081f20d7ec4f31dd23c9d61e9c963bb7f71cc"} Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.589427 4693 generic.go:334] "Generic (PLEG): container finished" podID="e35b458b-b638-4684-8f5b-bcf2d0cf692f" containerID="822f2009737fc33742b968d24495693962e99d5f16282120857d0389f38c2eb9" exitCode=0 Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.589461 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9pf7" event={"ID":"e35b458b-b638-4684-8f5b-bcf2d0cf692f","Type":"ContainerDied","Data":"822f2009737fc33742b968d24495693962e99d5f16282120857d0389f38c2eb9"} Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.589475 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9pf7" event={"ID":"e35b458b-b638-4684-8f5b-bcf2d0cf692f","Type":"ContainerStarted","Data":"631d8f9923934a95c73f9e5e84c6d62ee80be3f0b6e55d66134846c8e509136c"} Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.590962 4693 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.592683 4693 generic.go:334] "Generic (PLEG): container finished" podID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" containerID="6caef5a4390a4072bc15cf605024f0078524b2cc6a334bd0b5f50f1554241b7a" exitCode=0 Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.592723 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctq58" event={"ID":"77421421-26f5-4e9a-8857-bd1f5a9d8fa9","Type":"ContainerDied","Data":"6caef5a4390a4072bc15cf605024f0078524b2cc6a334bd0b5f50f1554241b7a"} Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.592740 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctq58" event={"ID":"77421421-26f5-4e9a-8857-bd1f5a9d8fa9","Type":"ContainerStarted","Data":"76ad148b5959544971c0eacc7f81a3e9011b261f9896d241eb07947ded9c6ba8"} Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.596221 4693 generic.go:334] "Generic (PLEG): container finished" podID="38d663d8-7b9e-4685-9b27-cdf525b225af" containerID="b308896f564709a5a6d8f1c0bf7725afd2ee438e13adbe94cbcb3fba4788df77" exitCode=0 Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.596297 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4cpj" event={"ID":"38d663d8-7b9e-4685-9b27-cdf525b225af","Type":"ContainerDied","Data":"b308896f564709a5a6d8f1c0bf7725afd2ee438e13adbe94cbcb3fba4788df77"} Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.596318 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4cpj" event={"ID":"38d663d8-7b9e-4685-9b27-cdf525b225af","Type":"ContainerStarted","Data":"c3b69af7709adfc187eb8fa3830046b2b702c3a183804d0380eb1b817a8ab964"} Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.602036 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-47c86" 
event={"ID":"f28a792f-4814-4a24-ab79-3a5b00adb25e","Type":"ContainerStarted","Data":"a73d5a8db64bf11ebef58f441837303b937e0976353efaa0b70ac52f113b6a99"} Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.602626 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-47c86" event={"ID":"f28a792f-4814-4a24-ab79-3a5b00adb25e","Type":"ContainerStarted","Data":"9e175ac2f88e84c619075a72e01b2f623d0d34699bc1e6a26a67dc75cb0e7d51"} Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.686459 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-47c86" podStartSLOduration=141.68642642 podStartE2EDuration="2m21.68642642s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:55.649456524 +0000 UTC m=+162.818096125" watchObservedRunningTime="2025-12-12 15:48:55.68642642 +0000 UTC m=+162.855066021" Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.888391 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm" Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.955090 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:48:55 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld Dec 12 15:48:55 crc kubenswrapper[4693]: [+]process-running ok Dec 12 15:48:55 crc kubenswrapper[4693]: healthz check failed Dec 12 15:48:55 crc kubenswrapper[4693]: I1212 15:48:55.955177 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.054773 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f92dec3b-e25a-4f3f-a004-e85cc51093c5-config-volume\") pod \"f92dec3b-e25a-4f3f-a004-e85cc51093c5\" (UID: \"f92dec3b-e25a-4f3f-a004-e85cc51093c5\") " Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.054858 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f92dec3b-e25a-4f3f-a004-e85cc51093c5-secret-volume\") pod \"f92dec3b-e25a-4f3f-a004-e85cc51093c5\" (UID: \"f92dec3b-e25a-4f3f-a004-e85cc51093c5\") " Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.054909 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl2tz\" (UniqueName: \"kubernetes.io/projected/f92dec3b-e25a-4f3f-a004-e85cc51093c5-kube-api-access-xl2tz\") pod \"f92dec3b-e25a-4f3f-a004-e85cc51093c5\" (UID: \"f92dec3b-e25a-4f3f-a004-e85cc51093c5\") " Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.055767 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f92dec3b-e25a-4f3f-a004-e85cc51093c5-config-volume" (OuterVolumeSpecName: "config-volume") pod "f92dec3b-e25a-4f3f-a004-e85cc51093c5" (UID: "f92dec3b-e25a-4f3f-a004-e85cc51093c5"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.066392 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f92dec3b-e25a-4f3f-a004-e85cc51093c5-kube-api-access-xl2tz" (OuterVolumeSpecName: "kube-api-access-xl2tz") pod "f92dec3b-e25a-4f3f-a004-e85cc51093c5" (UID: "f92dec3b-e25a-4f3f-a004-e85cc51093c5"). InnerVolumeSpecName "kube-api-access-xl2tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.066474 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92dec3b-e25a-4f3f-a004-e85cc51093c5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f92dec3b-e25a-4f3f-a004-e85cc51093c5" (UID: "f92dec3b-e25a-4f3f-a004-e85cc51093c5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.103507 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q4lmj"] Dec 12 15:48:56 crc kubenswrapper[4693]: E1212 15:48:56.103722 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92dec3b-e25a-4f3f-a004-e85cc51093c5" containerName="collect-profiles" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.103734 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92dec3b-e25a-4f3f-a004-e85cc51093c5" containerName="collect-profiles" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.103815 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f92dec3b-e25a-4f3f-a004-e85cc51093c5" containerName="collect-profiles" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.104506 4693 util.go:30] "No sandbox for pod can be found. 
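[Editorial note] The "Observed pod startup duration" entries above are plain timestamp arithmetic: both report firstStartedPulling/lastFinishedPulling as the zero time (no image pulls were recorded), so podStartSLOduration reduces to watchObservedRunningTime minus podCreationTimestamp. A small Go check using the image-registry values quoted in the tracker entry:

```go
// Reproduce podStartSLOduration for image-registry-697d97f7c8-47c86 from the
// two timestamps in the pod_startup_latency_tracker entry above: with no pull
// time recorded, it is simply observedRunningTime - podCreationTimestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST" // Go accepts fractional seconds when parsing
	created, err := time.Parse(layout, "2025-12-12 15:46:34 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-12-12 15:48:55.68642642 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 141.68642642s, matching podStartSLOduration=141.68642642 in the log.
	fmt.Printf("podStartSLOduration = %.8fs\n", observed.Sub(created).Seconds())
}
```

The same subtraction applied to csi-hostpathplugin-wwxcz (created 15:48:44, observed running 15:48:54.627) yields the 10.627s reported earlier.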
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.109406 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.122798 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4lmj"] Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.156390 4693 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f92dec3b-e25a-4f3f-a004-e85cc51093c5-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.156530 4693 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f92dec3b-e25a-4f3f-a004-e85cc51093c5-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.156543 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl2tz\" (UniqueName: \"kubernetes.io/projected/f92dec3b-e25a-4f3f-a004-e85cc51093c5-kube-api-access-xl2tz\") on node \"crc\" DevicePath \"\"" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.258795 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1790176a-e8f5-4490-b020-53392f0475cc-utilities\") pod \"redhat-marketplace-q4lmj\" (UID: \"1790176a-e8f5-4490-b020-53392f0475cc\") " pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.259308 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2z4s\" (UniqueName: \"kubernetes.io/projected/1790176a-e8f5-4490-b020-53392f0475cc-kube-api-access-s2z4s\") pod \"redhat-marketplace-q4lmj\" (UID: \"1790176a-e8f5-4490-b020-53392f0475cc\") " pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.260197 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.261214 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.262045 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1790176a-e8f5-4490-b020-53392f0475cc-catalog-content\") pod \"redhat-marketplace-q4lmj\" (UID: \"1790176a-e8f5-4490-b020-53392f0475cc\") " pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.264813 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.264813 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.270540 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.363699 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1790176a-e8f5-4490-b020-53392f0475cc-catalog-content\") pod \"redhat-marketplace-q4lmj\" (UID: \"1790176a-e8f5-4490-b020-53392f0475cc\") " pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.363747 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc5c8c79-3c41-4a0d-8c6e-a712439521d9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fc5c8c79-3c41-4a0d-8c6e-a712439521d9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.363778 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc5c8c79-3c41-4a0d-8c6e-a712439521d9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fc5c8c79-3c41-4a0d-8c6e-a712439521d9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.363800 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1790176a-e8f5-4490-b020-53392f0475cc-utilities\") pod \"redhat-marketplace-q4lmj\" (UID: \"1790176a-e8f5-4490-b020-53392f0475cc\") " pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.363819 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2z4s\" (UniqueName: \"kubernetes.io/projected/1790176a-e8f5-4490-b020-53392f0475cc-kube-api-access-s2z4s\") pod \"redhat-marketplace-q4lmj\" (UID: \"1790176a-e8f5-4490-b020-53392f0475cc\") " pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.364337 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1790176a-e8f5-4490-b020-53392f0475cc-catalog-content\") pod \"redhat-marketplace-q4lmj\" (UID: \"1790176a-e8f5-4490-b020-53392f0475cc\") " pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.364560 4693 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1790176a-e8f5-4490-b020-53392f0475cc-utilities\") pod \"redhat-marketplace-q4lmj\" (UID: \"1790176a-e8f5-4490-b020-53392f0475cc\") " pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.393907 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2z4s\" (UniqueName: \"kubernetes.io/projected/1790176a-e8f5-4490-b020-53392f0475cc-kube-api-access-s2z4s\") pod \"redhat-marketplace-q4lmj\" (UID: \"1790176a-e8f5-4490-b020-53392f0475cc\") " pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.417967 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.464863 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc5c8c79-3c41-4a0d-8c6e-a712439521d9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fc5c8c79-3c41-4a0d-8c6e-a712439521d9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.465017 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc5c8c79-3c41-4a0d-8c6e-a712439521d9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fc5c8c79-3c41-4a0d-8c6e-a712439521d9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.465114 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc5c8c79-3c41-4a0d-8c6e-a712439521d9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fc5c8c79-3c41-4a0d-8c6e-a712439521d9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.482466 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc5c8c79-3c41-4a0d-8c6e-a712439521d9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fc5c8c79-3c41-4a0d-8c6e-a712439521d9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.502597 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kb99c"] Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.503972 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.515924 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb99c"] Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.576205 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.644758 4693 util.go:48] "No ready sandbox for pod can be found. 
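[Editorial note] The kube-api-access-* volumes being mounted throughout this window are projected service-account volumes: inside each container they surface at one well-known path with three entries (a bound token, the cluster CA bundle, and the pod's namespace). An illustrative sketch of what a process in one of these pods would see; it only works when run inside a pod with such a mount:

```go
// Inspect the projected service-account volume that the kube-api-access-*
// mounts above provide. This is the standard in-container mount point.
package main

import (
	"fmt"
	"os"
)

func main() {
	const base = "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		data, err := os.ReadFile(base + "/" + name)
		if err != nil {
			fmt.Println(name, "->", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(data)) // token is a bound, expiring JWT
	}
}
```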
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.644749 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm" event={"ID":"f92dec3b-e25a-4f3f-a004-e85cc51093c5","Type":"ContainerDied","Data":"62b19552fd0c1ead6c1f1b1e53acebe6500da73d831e96df13d8746f5b255625"} Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.644924 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62b19552fd0c1ead6c1f1b1e53acebe6500da73d831e96df13d8746f5b255625" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.645012 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.667457 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58415397-b1c4-41c4-abd4-518a27eda647-catalog-content\") pod \"redhat-marketplace-kb99c\" (UID: \"58415397-b1c4-41c4-abd4-518a27eda647\") " pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.668039 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdsw9\" (UniqueName: \"kubernetes.io/projected/58415397-b1c4-41c4-abd4-518a27eda647-kube-api-access-fdsw9\") pod \"redhat-marketplace-kb99c\" (UID: \"58415397-b1c4-41c4-abd4-518a27eda647\") " pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.668093 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58415397-b1c4-41c4-abd4-518a27eda647-utilities\") pod \"redhat-marketplace-kb99c\" (UID: \"58415397-b1c4-41c4-abd4-518a27eda647\") " pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.734772 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4lmj"] Dec 12 15:48:56 crc kubenswrapper[4693]: W1212 15:48:56.750611 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1790176a_e8f5_4490_b020_53392f0475cc.slice/crio-54805fb427b9dae2da0b62441319bcb59f98394fe60f2e2577605981e16645a7 WatchSource:0}: Error finding container 54805fb427b9dae2da0b62441319bcb59f98394fe60f2e2577605981e16645a7: Status 404 returned error can't find the container with id 54805fb427b9dae2da0b62441319bcb59f98394fe60f2e2577605981e16645a7 Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.771653 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58415397-b1c4-41c4-abd4-518a27eda647-catalog-content\") pod \"redhat-marketplace-kb99c\" (UID: \"58415397-b1c4-41c4-abd4-518a27eda647\") " pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.771739 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdsw9\" (UniqueName: \"kubernetes.io/projected/58415397-b1c4-41c4-abd4-518a27eda647-kube-api-access-fdsw9\") pod \"redhat-marketplace-kb99c\" (UID: \"58415397-b1c4-41c4-abd4-518a27eda647\") 
" pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.771784 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58415397-b1c4-41c4-abd4-518a27eda647-utilities\") pod \"redhat-marketplace-kb99c\" (UID: \"58415397-b1c4-41c4-abd4-518a27eda647\") " pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.773068 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58415397-b1c4-41c4-abd4-518a27eda647-catalog-content\") pod \"redhat-marketplace-kb99c\" (UID: \"58415397-b1c4-41c4-abd4-518a27eda647\") " pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.774036 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58415397-b1c4-41c4-abd4-518a27eda647-utilities\") pod \"redhat-marketplace-kb99c\" (UID: \"58415397-b1c4-41c4-abd4-518a27eda647\") " pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.799631 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdsw9\" (UniqueName: \"kubernetes.io/projected/58415397-b1c4-41c4-abd4-518a27eda647-kube-api-access-fdsw9\") pod \"redhat-marketplace-kb99c\" (UID: \"58415397-b1c4-41c4-abd4-518a27eda647\") " pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.815189 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.815232 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.819047 4693 patch_prober.go:28] interesting pod/console-f9d7485db-b7rfx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.819102 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-b7rfx" podUID="c9efa1e6-826d-4d2f-8c65-5993738eb0b9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.831410 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.855957 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.864772 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-d28jp" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.956957 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.971206 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:48:56 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld Dec 12 15:48:56 crc kubenswrapper[4693]: [+]process-running ok Dec 12 15:48:56 crc kubenswrapper[4693]: healthz check failed Dec 12 15:48:56 crc kubenswrapper[4693]: I1212 15:48:56.971267 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.102816 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.102871 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.107373 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zmcqt"] Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.108449 4693 util.go:30] "No sandbox for pod can be found. 
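[Editorial note] The repeated Readiness/Startup probe failures in this window (marketplace-operator, downloads, console, router) are all the kubelet's prober getting "connection refused" or an HTTP 500 from a health endpoint that is not serving yet; as the services come up, the matching "SyncLoop (probe) ... status=ready" entries follow. A stripped-down imitation of the HTTP check the prober performs, with the endpoint copied from the marketplace-operator entry; the kubelet treats any status in the 2xx/3xx range as success:

```go
// Minimal imitation of a kubelet HTTP readiness probe against the
// marketplace-operator healthz endpoint quoted in the log above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://10.217.0.38:8080/healthz")
	if err != nil {
		// Matches the log: "dial tcp 10.217.0.38:8080: connect: connection refused"
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe success:", resp.Status)
	} else {
		// The router's startup probe lands here: healthz returns 500 while
		// [-]backend-http and [-]has-synced are still failing.
		fmt.Println("probe failure: status", resp.Status)
	}
}
```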
Need to start a new one" pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.115905 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.120685 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zmcqt"] Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.134230 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.211940 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.293569 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cce9d41-da95-4956-bdb8-f234c2f96bac-utilities\") pod \"redhat-operators-zmcqt\" (UID: \"7cce9d41-da95-4956-bdb8-f234c2f96bac\") " pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.294015 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cce9d41-da95-4956-bdb8-f234c2f96bac-catalog-content\") pod \"redhat-operators-zmcqt\" (UID: \"7cce9d41-da95-4956-bdb8-f234c2f96bac\") " pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.294047 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj7lj\" (UniqueName: \"kubernetes.io/projected/7cce9d41-da95-4956-bdb8-f234c2f96bac-kube-api-access-gj7lj\") pod \"redhat-operators-zmcqt\" (UID: \"7cce9d41-da95-4956-bdb8-f234c2f96bac\") " pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.307894 4693 patch_prober.go:28] interesting pod/downloads-7954f5f757-bz9v2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.307969 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bz9v2" podUID="586d0874-ebe1-41db-b596-1dfed12b2b94" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.308965 4693 patch_prober.go:28] interesting pod/downloads-7954f5f757-bz9v2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.309027 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bz9v2" podUID="586d0874-ebe1-41db-b596-1dfed12b2b94" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Dec 12 15:48:57 crc kubenswrapper[4693]: W1212 15:48:57.349557 4693 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-podfc5c8c79_3c41_4a0d_8c6e_a712439521d9.slice/crio-f79fcc8e6be1c8586ef80b9bfc7c60cfb10b6e70ebf85f8f3db34e48dd372a14 WatchSource:0}: Error finding container f79fcc8e6be1c8586ef80b9bfc7c60cfb10b6e70ebf85f8f3db34e48dd372a14: Status 404 returned error can't find the container with id f79fcc8e6be1c8586ef80b9bfc7c60cfb10b6e70ebf85f8f3db34e48dd372a14 Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.392843 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb99c"] Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.395041 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cce9d41-da95-4956-bdb8-f234c2f96bac-utilities\") pod \"redhat-operators-zmcqt\" (UID: \"7cce9d41-da95-4956-bdb8-f234c2f96bac\") " pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.395076 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cce9d41-da95-4956-bdb8-f234c2f96bac-catalog-content\") pod \"redhat-operators-zmcqt\" (UID: \"7cce9d41-da95-4956-bdb8-f234c2f96bac\") " pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.395100 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj7lj\" (UniqueName: \"kubernetes.io/projected/7cce9d41-da95-4956-bdb8-f234c2f96bac-kube-api-access-gj7lj\") pod \"redhat-operators-zmcqt\" (UID: \"7cce9d41-da95-4956-bdb8-f234c2f96bac\") " pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.442904 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cce9d41-da95-4956-bdb8-f234c2f96bac-utilities\") pod \"redhat-operators-zmcqt\" (UID: \"7cce9d41-da95-4956-bdb8-f234c2f96bac\") " pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.443028 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cce9d41-da95-4956-bdb8-f234c2f96bac-catalog-content\") pod \"redhat-operators-zmcqt\" (UID: \"7cce9d41-da95-4956-bdb8-f234c2f96bac\") " pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.455752 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj7lj\" (UniqueName: \"kubernetes.io/projected/7cce9d41-da95-4956-bdb8-f234c2f96bac-kube-api-access-gj7lj\") pod \"redhat-operators-zmcqt\" (UID: \"7cce9d41-da95-4956-bdb8-f234c2f96bac\") " pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.510016 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-klr5t"] Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.512578 4693 util.go:30] "No sandbox for pod can be found. 
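[Editorial note] The manager.go:1169 warnings ("Failed to process watch event ... Status 404 ... can't find the container") are another benign ordering race: cAdvisor notices a new crio-<id> cgroup directory the moment the runtime creates it, before CRI-O can answer a lookup for that container ID. A hypothetical illustration of where those events originate, walking the kubepods cgroup hierarchy on the node (cgroup v2 layout assumed, paths taken from the warning itself):

```go
// Enumerate per-container cgroup directories like the one named in the
// watch-event warning above. Each crio-<id> directory appearing here is what
// triggers cAdvisor's inotify watch before the container is queryable via CRI.
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	root := "/sys/fs/cgroup/kubepods.slice"
	filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return nil // best effort: skip entries we cannot read
		}
		if d.IsDir() && strings.HasPrefix(d.Name(), "crio-") {
			fmt.Println(path) // one cgroup per CRI-O container
		}
		return nil
	})
}
```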
Need to start a new one" pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.555520 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-klr5t"] Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.672635 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.680305 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb99c" event={"ID":"58415397-b1c4-41c4-abd4-518a27eda647","Type":"ContainerStarted","Data":"a5a5de1916612650b0d5d50a79ba79ca3d62c4af01263434a9461a5232c052e9"} Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.685837 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4lmj" event={"ID":"1790176a-e8f5-4490-b020-53392f0475cc","Type":"ContainerStarted","Data":"ba8311edd6132fca55adeead29771e6074d98ee54eabcd61aa907f0e97850c9c"} Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.685907 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4lmj" event={"ID":"1790176a-e8f5-4490-b020-53392f0475cc","Type":"ContainerStarted","Data":"54805fb427b9dae2da0b62441319bcb59f98394fe60f2e2577605981e16645a7"} Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.698230 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fc5c8c79-3c41-4a0d-8c6e-a712439521d9","Type":"ContainerStarted","Data":"f79fcc8e6be1c8586ef80b9bfc7c60cfb10b6e70ebf85f8f3db34e48dd372a14"} Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.700064 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a461525-8c58-4454-b928-32dfc677061b-utilities\") pod \"redhat-operators-klr5t\" (UID: \"6a461525-8c58-4454-b928-32dfc677061b\") " pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.700135 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw7vg\" (UniqueName: \"kubernetes.io/projected/6a461525-8c58-4454-b928-32dfc677061b-kube-api-access-bw7vg\") pod \"redhat-operators-klr5t\" (UID: \"6a461525-8c58-4454-b928-32dfc677061b\") " pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.700181 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a461525-8c58-4454-b928-32dfc677061b-catalog-content\") pod \"redhat-operators-klr5t\" (UID: \"6a461525-8c58-4454-b928-32dfc677061b\") " pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.706643 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.747127 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.808612 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a461525-8c58-4454-b928-32dfc677061b-utilities\") pod \"redhat-operators-klr5t\" (UID: \"6a461525-8c58-4454-b928-32dfc677061b\") " pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.808720 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw7vg\" (UniqueName: \"kubernetes.io/projected/6a461525-8c58-4454-b928-32dfc677061b-kube-api-access-bw7vg\") pod \"redhat-operators-klr5t\" (UID: \"6a461525-8c58-4454-b928-32dfc677061b\") " pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.808776 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a461525-8c58-4454-b928-32dfc677061b-catalog-content\") pod \"redhat-operators-klr5t\" (UID: \"6a461525-8c58-4454-b928-32dfc677061b\") " pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.812948 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a461525-8c58-4454-b928-32dfc677061b-catalog-content\") pod \"redhat-operators-klr5t\" (UID: \"6a461525-8c58-4454-b928-32dfc677061b\") " pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.813988 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a461525-8c58-4454-b928-32dfc677061b-utilities\") pod \"redhat-operators-klr5t\" (UID: \"6a461525-8c58-4454-b928-32dfc677061b\") " pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.884548 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw7vg\" (UniqueName: \"kubernetes.io/projected/6a461525-8c58-4454-b928-32dfc677061b-kube-api-access-bw7vg\") pod \"redhat-operators-klr5t\" (UID: \"6a461525-8c58-4454-b928-32dfc677061b\") " pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.960403 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:48:57 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld Dec 12 15:48:57 crc kubenswrapper[4693]: [+]process-running ok Dec 12 15:48:57 crc kubenswrapper[4693]: healthz check failed Dec 12 15:48:57 crc kubenswrapper[4693]: I1212 15:48:57.960497 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.171048 4693 util.go:30] "No sandbox for pod can be found. 
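[Editorial note] The "SyncLoop (PLEG)" entries threaded through this window follow a fixed pattern per catalog pod: the extract/utility container runs and exits 0 (ContainerDied) while the pod sandbox and serving containers start (ContainerStarted). A hypothetical helper, not part of the kubelet, that scans journal lines like these from stdin and tallies PLEG events per pod; the regexp matches the quoting style seen in this log and may need adjusting for other kubelet versions:

```go
// Tally kubelet PLEG events per pod from journal lines on stdin, e.g.:
//   journalctl -u kubelet | go run pleg_tally.go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`pod="([^"]+)" event=\{"ID":"[^"]+","Type":"(ContainerStarted|ContainerDied)"`)
	tally := map[string]map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // journal lines can be very long
	for sc.Scan() {
		// A fused line may carry several events; collect all matches.
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			if tally[m[1]] == nil {
				tally[m[1]] = map[string]int{}
			}
			tally[m[1]][m[2]]++
		}
	}
	for pod, counts := range tally {
		fmt.Printf("%s: started=%d died=%d\n", pod, counts["ContainerStarted"], counts["ContainerDied"])
	}
}
```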
Need to start a new one" pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.238649 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zmcqt"] Dec 12 15:48:58 crc kubenswrapper[4693]: W1212 15:48:58.262000 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cce9d41_da95_4956_bdb8_f234c2f96bac.slice/crio-fa2d3492e64faead34dfae559ed43d14440227b69eff978eda7e49cc1aafd484 WatchSource:0}: Error finding container fa2d3492e64faead34dfae559ed43d14440227b69eff978eda7e49cc1aafd484: Status 404 returned error can't find the container with id fa2d3492e64faead34dfae559ed43d14440227b69eff978eda7e49cc1aafd484 Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.324430 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs\") pod \"network-metrics-daemon-w4zs6\" (UID: \"6ef3804b-c2b3-4645-b60f-9bc977a89f69\") " pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.388055 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ef3804b-c2b3-4645-b60f-9bc977a89f69-metrics-certs\") pod \"network-metrics-daemon-w4zs6\" (UID: \"6ef3804b-c2b3-4645-b60f-9bc977a89f69\") " pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.561136 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.570899 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.571008 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.573195 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.580328 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.664101 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.686220 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w4zs6" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.728991 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44b7631b-dd96-4e33-9067-608542118d8b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"44b7631b-dd96-4e33-9067-608542118d8b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.729086 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44b7631b-dd96-4e33-9067-608542118d8b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"44b7631b-dd96-4e33-9067-608542118d8b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.747621 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fc5c8c79-3c41-4a0d-8c6e-a712439521d9","Type":"ContainerStarted","Data":"31c6a0e0d3c033e1a67bb51a2c1552b5199582da349bb8a54810c249232ff8fe"} Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.761152 4693 generic.go:334] "Generic (PLEG): container finished" podID="58415397-b1c4-41c4-abd4-518a27eda647" containerID="031f37acfb428dee64494a3681ae7b805b4c632630cd2ba4d6df8174adc43ae7" exitCode=0 Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.761822 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb99c" event={"ID":"58415397-b1c4-41c4-abd4-518a27eda647","Type":"ContainerDied","Data":"031f37acfb428dee64494a3681ae7b805b4c632630cd2ba4d6df8174adc43ae7"} Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.796923 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.796902216 podStartE2EDuration="2.796902216s" podCreationTimestamp="2025-12-12 15:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:48:58.783177805 +0000 UTC m=+165.951817416" watchObservedRunningTime="2025-12-12 15:48:58.796902216 +0000 UTC m=+165.965541817" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.802897 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-klr5t"] Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.829940 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44b7631b-dd96-4e33-9067-608542118d8b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"44b7631b-dd96-4e33-9067-608542118d8b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.830370 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44b7631b-dd96-4e33-9067-608542118d8b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"44b7631b-dd96-4e33-9067-608542118d8b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.830748 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/44b7631b-dd96-4e33-9067-608542118d8b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"44b7631b-dd96-4e33-9067-608542118d8b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.832115 4693 generic.go:334] "Generic (PLEG): container finished" podID="7cce9d41-da95-4956-bdb8-f234c2f96bac" containerID="bfcc60937ad359244f2cc7f93df6cab1931be45ff4ff2e4470aae77646bc5de9" exitCode=0 Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.832198 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmcqt" event={"ID":"7cce9d41-da95-4956-bdb8-f234c2f96bac","Type":"ContainerDied","Data":"bfcc60937ad359244f2cc7f93df6cab1931be45ff4ff2e4470aae77646bc5de9"} Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.832223 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmcqt" event={"ID":"7cce9d41-da95-4956-bdb8-f234c2f96bac","Type":"ContainerStarted","Data":"fa2d3492e64faead34dfae559ed43d14440227b69eff978eda7e49cc1aafd484"} Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.873109 4693 generic.go:334] "Generic (PLEG): container finished" podID="1790176a-e8f5-4490-b020-53392f0475cc" containerID="ba8311edd6132fca55adeead29771e6074d98ee54eabcd61aa907f0e97850c9c" exitCode=0 Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.873498 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4lmj" event={"ID":"1790176a-e8f5-4490-b020-53392f0475cc","Type":"ContainerDied","Data":"ba8311edd6132fca55adeead29771e6074d98ee54eabcd61aa907f0e97850c9c"} Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.873317 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44b7631b-dd96-4e33-9067-608542118d8b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"44b7631b-dd96-4e33-9067-608542118d8b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.960942 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:48:58 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld Dec 12 15:48:58 crc kubenswrapper[4693]: [+]process-running ok Dec 12 15:48:58 crc kubenswrapper[4693]: healthz check failed Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.961004 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:48:58 crc kubenswrapper[4693]: I1212 15:48:58.989539 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 15:48:59 crc kubenswrapper[4693]: I1212 15:48:59.381333 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-w4zs6"] Dec 12 15:48:59 crc kubenswrapper[4693]: I1212 15:48:59.620742 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 12 15:48:59 crc kubenswrapper[4693]: W1212 15:48:59.670904 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod44b7631b_dd96_4e33_9067_608542118d8b.slice/crio-1b1c25c71c87648bd25de289d95b3d42d893ecd8d611807f5ead455836fed8bc WatchSource:0}: Error finding container 1b1c25c71c87648bd25de289d95b3d42d893ecd8d611807f5ead455836fed8bc: Status 404 returned error can't find the container with id 1b1c25c71c87648bd25de289d95b3d42d893ecd8d611807f5ead455836fed8bc Dec 12 15:48:59 crc kubenswrapper[4693]: I1212 15:48:59.932885 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"44b7631b-dd96-4e33-9067-608542118d8b","Type":"ContainerStarted","Data":"1b1c25c71c87648bd25de289d95b3d42d893ecd8d611807f5ead455836fed8bc"} Dec 12 15:48:59 crc kubenswrapper[4693]: I1212 15:48:59.936391 4693 generic.go:334] "Generic (PLEG): container finished" podID="fc5c8c79-3c41-4a0d-8c6e-a712439521d9" containerID="31c6a0e0d3c033e1a67bb51a2c1552b5199582da349bb8a54810c249232ff8fe" exitCode=0 Dec 12 15:48:59 crc kubenswrapper[4693]: I1212 15:48:59.936471 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fc5c8c79-3c41-4a0d-8c6e-a712439521d9","Type":"ContainerDied","Data":"31c6a0e0d3c033e1a67bb51a2c1552b5199582da349bb8a54810c249232ff8fe"} Dec 12 15:48:59 crc kubenswrapper[4693]: I1212 15:48:59.946318 4693 generic.go:334] "Generic (PLEG): container finished" podID="6a461525-8c58-4454-b928-32dfc677061b" containerID="761afdfa409c1959e9320f860aff9f38444f604e8348376ace34e21ece382b38" exitCode=0 Dec 12 15:48:59 crc kubenswrapper[4693]: I1212 15:48:59.946397 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klr5t" event={"ID":"6a461525-8c58-4454-b928-32dfc677061b","Type":"ContainerDied","Data":"761afdfa409c1959e9320f860aff9f38444f604e8348376ace34e21ece382b38"} Dec 12 15:48:59 crc kubenswrapper[4693]: I1212 15:48:59.946427 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klr5t" event={"ID":"6a461525-8c58-4454-b928-32dfc677061b","Type":"ContainerStarted","Data":"ee31bd8582cab08f405b8b5cffd7550217beb91dcbe54284bbfb5e3acba49d46"} Dec 12 15:48:59 crc kubenswrapper[4693]: I1212 15:48:59.966179 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:48:59 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld Dec 12 15:48:59 crc kubenswrapper[4693]: [+]process-running ok Dec 12 15:48:59 crc kubenswrapper[4693]: healthz check failed Dec 12 15:48:59 crc kubenswrapper[4693]: I1212 15:48:59.966249 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Dec 12 15:48:59 crc kubenswrapper[4693]: I1212 15:48:59.979469 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" event={"ID":"6ef3804b-c2b3-4645-b60f-9bc977a89f69","Type":"ContainerStarted","Data":"f2aca2ed830a5a976fffdfdf34b18000d323e51d61b6eee06c62dc7d3ef71fe3"} Dec 12 15:49:00 crc kubenswrapper[4693]: I1212 15:49:00.957179 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:49:00 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld Dec 12 15:49:00 crc kubenswrapper[4693]: [+]process-running ok Dec 12 15:49:00 crc kubenswrapper[4693]: healthz check failed Dec 12 15:49:00 crc kubenswrapper[4693]: I1212 15:49:00.957629 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:49:01 crc kubenswrapper[4693]: I1212 15:49:01.013977 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" event={"ID":"6ef3804b-c2b3-4645-b60f-9bc977a89f69","Type":"ContainerStarted","Data":"6a1e189d72ccbac5c9c66cd4d9c0bd757cdd781d2a24e3da0e06be3aaa45e106"} Dec 12 15:49:01 crc kubenswrapper[4693]: I1212 15:49:01.026129 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"44b7631b-dd96-4e33-9067-608542118d8b","Type":"ContainerStarted","Data":"27611ade79e7ab5a07f98949aeb0de6e37d1fe737f6250bf9e54e96c3f2c3dbb"} Dec 12 15:49:01 crc kubenswrapper[4693]: I1212 15:49:01.052593 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.052570633 podStartE2EDuration="3.052570633s" podCreationTimestamp="2025-12-12 15:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:49:01.044409754 +0000 UTC m=+168.213049375" watchObservedRunningTime="2025-12-12 15:49:01.052570633 +0000 UTC m=+168.221210234" Dec 12 15:49:01 crc kubenswrapper[4693]: I1212 15:49:01.794775 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 15:49:01 crc kubenswrapper[4693]: I1212 15:49:01.913974 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc5c8c79-3c41-4a0d-8c6e-a712439521d9-kube-api-access\") pod \"fc5c8c79-3c41-4a0d-8c6e-a712439521d9\" (UID: \"fc5c8c79-3c41-4a0d-8c6e-a712439521d9\") " Dec 12 15:49:01 crc kubenswrapper[4693]: I1212 15:49:01.914344 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc5c8c79-3c41-4a0d-8c6e-a712439521d9-kubelet-dir\") pod \"fc5c8c79-3c41-4a0d-8c6e-a712439521d9\" (UID: \"fc5c8c79-3c41-4a0d-8c6e-a712439521d9\") " Dec 12 15:49:01 crc kubenswrapper[4693]: I1212 15:49:01.914713 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc5c8c79-3c41-4a0d-8c6e-a712439521d9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fc5c8c79-3c41-4a0d-8c6e-a712439521d9" (UID: "fc5c8c79-3c41-4a0d-8c6e-a712439521d9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 15:49:01 crc kubenswrapper[4693]: I1212 15:49:01.938963 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5c8c79-3c41-4a0d-8c6e-a712439521d9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fc5c8c79-3c41-4a0d-8c6e-a712439521d9" (UID: "fc5c8c79-3c41-4a0d-8c6e-a712439521d9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:49:01 crc kubenswrapper[4693]: I1212 15:49:01.955716 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:49:01 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld Dec 12 15:49:01 crc kubenswrapper[4693]: [+]process-running ok Dec 12 15:49:01 crc kubenswrapper[4693]: healthz check failed Dec 12 15:49:01 crc kubenswrapper[4693]: I1212 15:49:01.955804 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:49:02 crc kubenswrapper[4693]: I1212 15:49:02.015913 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc5c8c79-3c41-4a0d-8c6e-a712439521d9-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:49:02 crc kubenswrapper[4693]: I1212 15:49:02.015946 4693 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc5c8c79-3c41-4a0d-8c6e-a712439521d9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:49:02 crc kubenswrapper[4693]: I1212 15:49:02.045943 4693 generic.go:334] "Generic (PLEG): container finished" podID="44b7631b-dd96-4e33-9067-608542118d8b" containerID="27611ade79e7ab5a07f98949aeb0de6e37d1fe737f6250bf9e54e96c3f2c3dbb" exitCode=0 Dec 12 15:49:02 crc kubenswrapper[4693]: I1212 15:49:02.046041 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"44b7631b-dd96-4e33-9067-608542118d8b","Type":"ContainerDied","Data":"27611ade79e7ab5a07f98949aeb0de6e37d1fe737f6250bf9e54e96c3f2c3dbb"} Dec 12 15:49:02 crc kubenswrapper[4693]: I1212 15:49:02.062054 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-w4zs6" event={"ID":"6ef3804b-c2b3-4645-b60f-9bc977a89f69","Type":"ContainerStarted","Data":"d34dc3d3225b050f3d028154bf6e295581283c04a49477b7c51db41de50466f4"} Dec 12 15:49:02 crc kubenswrapper[4693]: I1212 15:49:02.089511 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fc5c8c79-3c41-4a0d-8c6e-a712439521d9","Type":"ContainerDied","Data":"f79fcc8e6be1c8586ef80b9bfc7c60cfb10b6e70ebf85f8f3db34e48dd372a14"} Dec 12 15:49:02 crc kubenswrapper[4693]: I1212 15:49:02.089549 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f79fcc8e6be1c8586ef80b9bfc7c60cfb10b6e70ebf85f8f3db34e48dd372a14" Dec 12 15:49:02 crc kubenswrapper[4693]: I1212 15:49:02.089636 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 15:49:02 crc kubenswrapper[4693]: I1212 15:49:02.094114 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-w4zs6" podStartSLOduration=148.0940943 podStartE2EDuration="2m28.0940943s" podCreationTimestamp="2025-12-12 15:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:49:02.08982065 +0000 UTC m=+169.258460261" watchObservedRunningTime="2025-12-12 15:49:02.0940943 +0000 UTC m=+169.262733901" Dec 12 15:49:02 crc kubenswrapper[4693]: I1212 15:49:02.754972 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wd8bp" Dec 12 15:49:02 crc kubenswrapper[4693]: I1212 15:49:02.955205 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:49:02 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld Dec 12 15:49:02 crc kubenswrapper[4693]: [+]process-running ok Dec 12 15:49:02 crc kubenswrapper[4693]: healthz check failed Dec 12 15:49:02 crc kubenswrapper[4693]: I1212 15:49:02.955290 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:49:03 crc kubenswrapper[4693]: I1212 15:49:03.637172 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 15:49:03 crc kubenswrapper[4693]: I1212 15:49:03.750126 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44b7631b-dd96-4e33-9067-608542118d8b-kube-api-access\") pod \"44b7631b-dd96-4e33-9067-608542118d8b\" (UID: \"44b7631b-dd96-4e33-9067-608542118d8b\") " Dec 12 15:49:03 crc kubenswrapper[4693]: I1212 15:49:03.750254 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44b7631b-dd96-4e33-9067-608542118d8b-kubelet-dir\") pod \"44b7631b-dd96-4e33-9067-608542118d8b\" (UID: \"44b7631b-dd96-4e33-9067-608542118d8b\") " Dec 12 15:49:03 crc kubenswrapper[4693]: I1212 15:49:03.750694 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44b7631b-dd96-4e33-9067-608542118d8b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "44b7631b-dd96-4e33-9067-608542118d8b" (UID: "44b7631b-dd96-4e33-9067-608542118d8b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 15:49:03 crc kubenswrapper[4693]: I1212 15:49:03.779476 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44b7631b-dd96-4e33-9067-608542118d8b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "44b7631b-dd96-4e33-9067-608542118d8b" (UID: "44b7631b-dd96-4e33-9067-608542118d8b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:49:03 crc kubenswrapper[4693]: I1212 15:49:03.852726 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44b7631b-dd96-4e33-9067-608542118d8b-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:49:03 crc kubenswrapper[4693]: I1212 15:49:03.852782 4693 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44b7631b-dd96-4e33-9067-608542118d8b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:49:03 crc kubenswrapper[4693]: I1212 15:49:03.956139 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:49:03 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld Dec 12 15:49:03 crc kubenswrapper[4693]: [+]process-running ok Dec 12 15:49:03 crc kubenswrapper[4693]: healthz check failed Dec 12 15:49:03 crc kubenswrapper[4693]: I1212 15:49:03.956230 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:49:04 crc kubenswrapper[4693]: I1212 15:49:04.130969 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"44b7631b-dd96-4e33-9067-608542118d8b","Type":"ContainerDied","Data":"1b1c25c71c87648bd25de289d95b3d42d893ecd8d611807f5ead455836fed8bc"} Dec 12 15:49:04 crc kubenswrapper[4693]: I1212 15:49:04.131010 4693 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1b1c25c71c87648bd25de289d95b3d42d893ecd8d611807f5ead455836fed8bc" Dec 12 15:49:04 crc kubenswrapper[4693]: I1212 15:49:04.131062 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 15:49:04 crc kubenswrapper[4693]: I1212 15:49:04.955581 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:49:04 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld Dec 12 15:49:04 crc kubenswrapper[4693]: [+]process-running ok Dec 12 15:49:04 crc kubenswrapper[4693]: healthz check failed Dec 12 15:49:04 crc kubenswrapper[4693]: I1212 15:49:04.956113 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:49:05 crc kubenswrapper[4693]: I1212 15:49:05.954135 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 15:49:05 crc kubenswrapper[4693]: [-]has-synced failed: reason withheld Dec 12 15:49:05 crc kubenswrapper[4693]: [+]process-running ok Dec 12 15:49:05 crc kubenswrapper[4693]: healthz check failed Dec 12 15:49:05 crc kubenswrapper[4693]: I1212 15:49:05.954381 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 15:49:06 crc kubenswrapper[4693]: E1212 15:49:06.367626 4693 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.012s" Dec 12 15:49:06 crc kubenswrapper[4693]: I1212 15:49:06.815051 4693 patch_prober.go:28] interesting pod/console-f9d7485db-b7rfx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 12 15:49:06 crc kubenswrapper[4693]: I1212 15:49:06.815525 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-b7rfx" podUID="c9efa1e6-826d-4d2f-8c65-5993738eb0b9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 12 15:49:06 crc kubenswrapper[4693]: I1212 15:49:06.955226 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 15:49:06 crc kubenswrapper[4693]: I1212 15:49:06.959048 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 15:49:07 crc kubenswrapper[4693]: I1212 15:49:07.324923 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bz9v2" Dec 12 15:49:12 crc kubenswrapper[4693]: I1212 15:49:12.530773 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:49:12 crc kubenswrapper[4693]: I1212 15:49:12.531153 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:49:13 crc kubenswrapper[4693]: I1212 15:49:13.289463 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 15:49:14 crc kubenswrapper[4693]: I1212 15:49:14.468555 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:49:16 crc kubenswrapper[4693]: I1212 15:49:16.819870 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:49:16 crc kubenswrapper[4693]: I1212 15:49:16.823851 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:49:27 crc kubenswrapper[4693]: I1212 15:49:27.401990 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h" Dec 12 15:49:34 crc kubenswrapper[4693]: I1212 15:49:34.344364 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 12 15:49:34 crc kubenswrapper[4693]: E1212 15:49:34.345861 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44b7631b-dd96-4e33-9067-608542118d8b" containerName="pruner" Dec 12 15:49:34 crc kubenswrapper[4693]: I1212 15:49:34.345884 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="44b7631b-dd96-4e33-9067-608542118d8b" containerName="pruner" Dec 12 15:49:34 crc kubenswrapper[4693]: E1212 15:49:34.345904 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5c8c79-3c41-4a0d-8c6e-a712439521d9" containerName="pruner" Dec 12 15:49:34 crc kubenswrapper[4693]: I1212 15:49:34.345911 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5c8c79-3c41-4a0d-8c6e-a712439521d9" containerName="pruner" Dec 12 15:49:34 crc kubenswrapper[4693]: I1212 15:49:34.346026 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="44b7631b-dd96-4e33-9067-608542118d8b" containerName="pruner" Dec 12 15:49:34 crc kubenswrapper[4693]: I1212 15:49:34.346049 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5c8c79-3c41-4a0d-8c6e-a712439521d9" containerName="pruner" Dec 12 15:49:34 crc kubenswrapper[4693]: I1212 15:49:34.346658 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 15:49:34 crc kubenswrapper[4693]: I1212 15:49:34.352508 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 12 15:49:34 crc kubenswrapper[4693]: I1212 15:49:34.354022 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 12 15:49:34 crc kubenswrapper[4693]: I1212 15:49:34.354065 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 12 15:49:34 crc kubenswrapper[4693]: I1212 15:49:34.421204 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15291be2-8094-4d00-9f1a-e113b25a15ff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"15291be2-8094-4d00-9f1a-e113b25a15ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 15:49:34 crc kubenswrapper[4693]: I1212 15:49:34.421557 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/15291be2-8094-4d00-9f1a-e113b25a15ff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"15291be2-8094-4d00-9f1a-e113b25a15ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 15:49:34 crc kubenswrapper[4693]: I1212 15:49:34.523202 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15291be2-8094-4d00-9f1a-e113b25a15ff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"15291be2-8094-4d00-9f1a-e113b25a15ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 15:49:34 crc kubenswrapper[4693]: I1212 15:49:34.523326 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/15291be2-8094-4d00-9f1a-e113b25a15ff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"15291be2-8094-4d00-9f1a-e113b25a15ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 15:49:34 crc kubenswrapper[4693]: I1212 15:49:34.523448 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/15291be2-8094-4d00-9f1a-e113b25a15ff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"15291be2-8094-4d00-9f1a-e113b25a15ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 15:49:34 crc kubenswrapper[4693]: I1212 15:49:34.544255 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15291be2-8094-4d00-9f1a-e113b25a15ff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"15291be2-8094-4d00-9f1a-e113b25a15ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 15:49:34 crc kubenswrapper[4693]: I1212 15:49:34.667540 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 15:49:38 crc kubenswrapper[4693]: I1212 15:49:38.741043 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 12 15:49:38 crc kubenswrapper[4693]: I1212 15:49:38.742901 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 12 15:49:38 crc kubenswrapper[4693]: I1212 15:49:38.761591 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 12 15:49:38 crc kubenswrapper[4693]: I1212 15:49:38.896090 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffb66d17-57a0-4b72-803c-0daec41c6e72-kube-api-access\") pod \"installer-9-crc\" (UID: \"ffb66d17-57a0-4b72-803c-0daec41c6e72\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 15:49:38 crc kubenswrapper[4693]: I1212 15:49:38.896143 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ffb66d17-57a0-4b72-803c-0daec41c6e72-var-lock\") pod \"installer-9-crc\" (UID: \"ffb66d17-57a0-4b72-803c-0daec41c6e72\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 15:49:38 crc kubenswrapper[4693]: I1212 15:49:38.896174 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffb66d17-57a0-4b72-803c-0daec41c6e72-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ffb66d17-57a0-4b72-803c-0daec41c6e72\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 15:49:38 crc kubenswrapper[4693]: I1212 15:49:38.998206 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffb66d17-57a0-4b72-803c-0daec41c6e72-kube-api-access\") pod \"installer-9-crc\" (UID: \"ffb66d17-57a0-4b72-803c-0daec41c6e72\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 15:49:38 crc kubenswrapper[4693]: I1212 15:49:38.998317 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ffb66d17-57a0-4b72-803c-0daec41c6e72-var-lock\") pod \"installer-9-crc\" (UID: \"ffb66d17-57a0-4b72-803c-0daec41c6e72\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 15:49:38 crc kubenswrapper[4693]: I1212 15:49:38.998452 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffb66d17-57a0-4b72-803c-0daec41c6e72-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ffb66d17-57a0-4b72-803c-0daec41c6e72\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 15:49:38 crc kubenswrapper[4693]: I1212 15:49:38.998476 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ffb66d17-57a0-4b72-803c-0daec41c6e72-var-lock\") pod \"installer-9-crc\" (UID: \"ffb66d17-57a0-4b72-803c-0daec41c6e72\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 15:49:38 crc kubenswrapper[4693]: I1212 15:49:38.998597 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffb66d17-57a0-4b72-803c-0daec41c6e72-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ffb66d17-57a0-4b72-803c-0daec41c6e72\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 15:49:39 crc kubenswrapper[4693]: I1212 15:49:39.021770 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffb66d17-57a0-4b72-803c-0daec41c6e72-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"ffb66d17-57a0-4b72-803c-0daec41c6e72\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 15:49:39 crc kubenswrapper[4693]: I1212 15:49:39.115054 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 12 15:49:41 crc kubenswrapper[4693]: E1212 15:49:41.183065 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 12 15:49:41 crc kubenswrapper[4693]: E1212 15:49:41.183877 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdmc2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-p4cpj_openshift-marketplace(38d663d8-7b9e-4685-9b27-cdf525b225af): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 15:49:41 crc kubenswrapper[4693]: E1212 15:49:41.185092 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-p4cpj" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" Dec 12 15:49:41 crc kubenswrapper[4693]: E1212 15:49:41.283368 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 12 15:49:41 crc kubenswrapper[4693]: E1212 15:49:41.283548 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlpgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-v9pf7_openshift-marketplace(e35b458b-b638-4684-8f5b-bcf2d0cf692f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 15:49:41 crc kubenswrapper[4693]: E1212 15:49:41.284710 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-v9pf7" podUID="e35b458b-b638-4684-8f5b-bcf2d0cf692f" Dec 12 15:49:42 crc kubenswrapper[4693]: I1212 15:49:42.530306 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:49:42 crc kubenswrapper[4693]: I1212 15:49:42.530685 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:49:42 crc kubenswrapper[4693]: I1212 15:49:42.530737 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:49:42 crc kubenswrapper[4693]: I1212 15:49:42.531395 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 15:49:42 crc kubenswrapper[4693]: I1212 15:49:42.531508 4693 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28" gracePeriod=600 Dec 12 15:49:42 crc kubenswrapper[4693]: E1212 15:49:42.633319 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-p4cpj" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" Dec 12 15:49:42 crc kubenswrapper[4693]: E1212 15:49:42.633378 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-v9pf7" podUID="e35b458b-b638-4684-8f5b-bcf2d0cf692f" Dec 12 15:49:42 crc kubenswrapper[4693]: E1212 15:49:42.707806 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 12 15:49:42 crc kubenswrapper[4693]: E1212 15:49:42.708012 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdsw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kb99c_openshift-marketplace(58415397-b1c4-41c4-abd4-518a27eda647): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 15:49:42 crc kubenswrapper[4693]: E1212 15:49:42.709478 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled\"" pod="openshift-marketplace/redhat-marketplace-kb99c" podUID="58415397-b1c4-41c4-abd4-518a27eda647" Dec 12 15:49:43 crc kubenswrapper[4693]: I1212 15:49:43.507128 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28" exitCode=0 Dec 12 15:49:43 crc kubenswrapper[4693]: I1212 15:49:43.507180 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28"} Dec 12 15:49:44 crc kubenswrapper[4693]: E1212 15:49:44.133958 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kb99c" podUID="58415397-b1c4-41c4-abd4-518a27eda647" Dec 12 15:49:44 crc kubenswrapper[4693]: E1212 15:49:44.241706 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 12 15:49:44 crc kubenswrapper[4693]: E1212 15:49:44.241860 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86drp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ctq58_openshift-marketplace(77421421-26f5-4e9a-8857-bd1f5a9d8fa9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 15:49:44 crc kubenswrapper[4693]: E1212 15:49:44.243075 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying 
system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ctq58" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" Dec 12 15:49:44 crc kubenswrapper[4693]: E1212 15:49:44.302175 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 12 15:49:44 crc kubenswrapper[4693]: E1212 15:49:44.302355 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb757,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fvk2k_openshift-marketplace(a12f193b-21da-485e-a825-03f5bd5070b1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 15:49:44 crc kubenswrapper[4693]: E1212 15:49:44.303589 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-fvk2k" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" Dec 12 15:49:47 crc kubenswrapper[4693]: E1212 15:49:47.164024 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fvk2k" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" Dec 12 15:49:47 crc kubenswrapper[4693]: E1212 15:49:47.164024 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ctq58" 
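
From here on, the marketplace catalog pods alternate between ErrImagePull (the pull attempt itself failed; here the image copy was context-canceled) and ImagePullBackOff (the kubelet declining to retry until a backoff window expires). The retry discipline is exponential doubling up to a cap; below is a minimal sketch, assuming a 10s base and 5m cap for illustration rather than reading this cluster's actual configuration.

    package main

    import (
        "fmt"
        "time"
    )

    // nextDelay doubles the wait after each consecutive failed pull and clamps
    // it at limit -- the shape behind the ErrImagePull -> ImagePullBackOff
    // alternation above. The base and limit are assumed values, not read from
    // the kubelet.
    func nextDelay(failures int, base, limit time.Duration) time.Duration {
        d := base
        for i := 0; i < failures; i++ {
            d *= 2
            if d >= limit {
                return limit
            }
        }
        return d
    }

    func main() {
        for n := 0; n < 7; n++ {
            fmt.Printf("failure %d: back off %v before retrying the pull\n",
                n+1, nextDelay(n, 10*time.Second, 5*time.Minute))
        }
    }
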
podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" Dec 12 15:49:47 crc kubenswrapper[4693]: E1212 15:49:47.252470 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 12 15:49:47 crc kubenswrapper[4693]: E1212 15:49:47.252903 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bw7vg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-klr5t_openshift-marketplace(6a461525-8c58-4454-b928-32dfc677061b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 15:49:47 crc kubenswrapper[4693]: E1212 15:49:47.254429 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-klr5t" podUID="6a461525-8c58-4454-b928-32dfc677061b" Dec 12 15:49:47 crc kubenswrapper[4693]: E1212 15:49:47.284354 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 12 15:49:47 crc kubenswrapper[4693]: E1212 15:49:47.284931 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj7lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zmcqt_openshift-marketplace(7cce9d41-da95-4956-bdb8-f234c2f96bac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 15:49:47 crc kubenswrapper[4693]: E1212 15:49:47.286134 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zmcqt" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" Dec 12 15:49:47 crc kubenswrapper[4693]: E1212 15:49:47.326661 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 12 15:49:47 crc kubenswrapper[4693]: E1212 15:49:47.326836 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2z4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-q4lmj_openshift-marketplace(1790176a-e8f5-4490-b020-53392f0475cc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 15:49:47 crc kubenswrapper[4693]: E1212 15:49:47.328068 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-q4lmj" podUID="1790176a-e8f5-4490-b020-53392f0475cc" Dec 12 15:49:47 crc kubenswrapper[4693]: I1212 15:49:47.414151 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 12 15:49:47 crc kubenswrapper[4693]: I1212 15:49:47.447690 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 12 15:49:47 crc kubenswrapper[4693]: I1212 15:49:47.527237 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ffb66d17-57a0-4b72-803c-0daec41c6e72","Type":"ContainerStarted","Data":"579961c8416b9dc31c5182702c5c08f56a81c43079a8aa0636cb148a563c82c8"} Dec 12 15:49:47 crc kubenswrapper[4693]: I1212 15:49:47.528582 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"15291be2-8094-4d00-9f1a-e113b25a15ff","Type":"ContainerStarted","Data":"6ab20952572401bfcea98af63d53507c95f2041ad009490932908cc2ac942929"} Dec 12 15:49:47 crc kubenswrapper[4693]: I1212 15:49:47.531386 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"5ff91bd354fd1b1d52f5914f816ce98932ace1f4aced9a2d721aa0982cc50f10"} Dec 12 15:49:47 crc kubenswrapper[4693]: E1212 15:49:47.533385 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-q4lmj" podUID="1790176a-e8f5-4490-b020-53392f0475cc" Dec 12 15:49:47 crc kubenswrapper[4693]: E1212 15:49:47.534800 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zmcqt" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" Dec 12 15:49:47 crc kubenswrapper[4693]: E1212 15:49:47.537991 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-klr5t" podUID="6a461525-8c58-4454-b928-32dfc677061b" Dec 12 15:49:48 crc kubenswrapper[4693]: I1212 15:49:48.537401 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ffb66d17-57a0-4b72-803c-0daec41c6e72","Type":"ContainerStarted","Data":"847f24dbaf838c7528f3858f775cc3c624e8448b1e49734d48d2bfe21971ea1d"} Dec 12 15:49:48 crc kubenswrapper[4693]: I1212 15:49:48.538965 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"15291be2-8094-4d00-9f1a-e113b25a15ff","Type":"ContainerStarted","Data":"caa8594ff896dbb31c828176338a3b948ef787fe53f58931ee63af3d0ba3ae05"} Dec 12 15:49:48 crc kubenswrapper[4693]: I1212 15:49:48.554856 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=10.554838256 podStartE2EDuration="10.554838256s" podCreationTimestamp="2025-12-12 15:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:49:48.550613463 +0000 UTC m=+215.719253064" watchObservedRunningTime="2025-12-12 15:49:48.554838256 +0000 UTC m=+215.723477857" Dec 12 15:49:48 crc kubenswrapper[4693]: I1212 15:49:48.576296 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=14.576260819 podStartE2EDuration="14.576260819s" podCreationTimestamp="2025-12-12 15:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:49:48.568243125 +0000 UTC m=+215.736882746" watchObservedRunningTime="2025-12-12 15:49:48.576260819 +0000 UTC m=+215.744900420" Dec 12 15:49:49 crc kubenswrapper[4693]: I1212 15:49:49.545914 4693 generic.go:334] "Generic (PLEG): container finished" podID="15291be2-8094-4d00-9f1a-e113b25a15ff" containerID="caa8594ff896dbb31c828176338a3b948ef787fe53f58931ee63af3d0ba3ae05" exitCode=0 Dec 12 15:49:49 crc kubenswrapper[4693]: I1212 15:49:49.546034 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"15291be2-8094-4d00-9f1a-e113b25a15ff","Type":"ContainerDied","Data":"caa8594ff896dbb31c828176338a3b948ef787fe53f58931ee63af3d0ba3ae05"} Dec 12 15:49:50 crc kubenswrapper[4693]: I1212 15:49:50.603066 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wm8d5"] Dec 12 15:49:50 crc kubenswrapper[4693]: I1212 15:49:50.859535 
4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 15:49:51 crc kubenswrapper[4693]: I1212 15:49:51.014996 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15291be2-8094-4d00-9f1a-e113b25a15ff-kube-api-access\") pod \"15291be2-8094-4d00-9f1a-e113b25a15ff\" (UID: \"15291be2-8094-4d00-9f1a-e113b25a15ff\") " Dec 12 15:49:51 crc kubenswrapper[4693]: I1212 15:49:51.015050 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/15291be2-8094-4d00-9f1a-e113b25a15ff-kubelet-dir\") pod \"15291be2-8094-4d00-9f1a-e113b25a15ff\" (UID: \"15291be2-8094-4d00-9f1a-e113b25a15ff\") " Dec 12 15:49:51 crc kubenswrapper[4693]: I1212 15:49:51.015135 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15291be2-8094-4d00-9f1a-e113b25a15ff-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "15291be2-8094-4d00-9f1a-e113b25a15ff" (UID: "15291be2-8094-4d00-9f1a-e113b25a15ff"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 15:49:51 crc kubenswrapper[4693]: I1212 15:49:51.015563 4693 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/15291be2-8094-4d00-9f1a-e113b25a15ff-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:49:51 crc kubenswrapper[4693]: I1212 15:49:51.024082 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15291be2-8094-4d00-9f1a-e113b25a15ff-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "15291be2-8094-4d00-9f1a-e113b25a15ff" (UID: "15291be2-8094-4d00-9f1a-e113b25a15ff"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:49:51 crc kubenswrapper[4693]: I1212 15:49:51.116958 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15291be2-8094-4d00-9f1a-e113b25a15ff-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:49:51 crc kubenswrapper[4693]: I1212 15:49:51.558691 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"15291be2-8094-4d00-9f1a-e113b25a15ff","Type":"ContainerDied","Data":"6ab20952572401bfcea98af63d53507c95f2041ad009490932908cc2ac942929"} Dec 12 15:49:51 crc kubenswrapper[4693]: I1212 15:49:51.558968 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ab20952572401bfcea98af63d53507c95f2041ad009490932908cc2ac942929" Dec 12 15:49:51 crc kubenswrapper[4693]: I1212 15:49:51.558746 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 15:49:58 crc kubenswrapper[4693]: I1212 15:49:58.595409 4693 generic.go:334] "Generic (PLEG): container finished" podID="38d663d8-7b9e-4685-9b27-cdf525b225af" containerID="d18a369926247758c7b0e338b15f09faefa219778141face02f9b517a29874e0" exitCode=0 Dec 12 15:49:58 crc kubenswrapper[4693]: I1212 15:49:58.595543 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4cpj" event={"ID":"38d663d8-7b9e-4685-9b27-cdf525b225af","Type":"ContainerDied","Data":"d18a369926247758c7b0e338b15f09faefa219778141face02f9b517a29874e0"} Dec 12 15:49:58 crc kubenswrapper[4693]: I1212 15:49:58.601538 4693 generic.go:334] "Generic (PLEG): container finished" podID="e35b458b-b638-4684-8f5b-bcf2d0cf692f" containerID="98fa9939f95bbc3e9eb316d7f5c3cb9254b0c8caf448502ce9b0f7099e4151de" exitCode=0 Dec 12 15:49:58 crc kubenswrapper[4693]: I1212 15:49:58.601578 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9pf7" event={"ID":"e35b458b-b638-4684-8f5b-bcf2d0cf692f","Type":"ContainerDied","Data":"98fa9939f95bbc3e9eb316d7f5c3cb9254b0c8caf448502ce9b0f7099e4151de"} Dec 12 15:50:01 crc kubenswrapper[4693]: I1212 15:50:01.628515 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9pf7" event={"ID":"e35b458b-b638-4684-8f5b-bcf2d0cf692f","Type":"ContainerStarted","Data":"5fb0864b6220c1237c6b82b3ef2a2326262cfc24254f5127241e85a9be74fc05"} Dec 12 15:50:01 crc kubenswrapper[4693]: I1212 15:50:01.631624 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4cpj" event={"ID":"38d663d8-7b9e-4685-9b27-cdf525b225af","Type":"ContainerStarted","Data":"e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895"} Dec 12 15:50:01 crc kubenswrapper[4693]: I1212 15:50:01.633835 4693 generic.go:334] "Generic (PLEG): container finished" podID="58415397-b1c4-41c4-abd4-518a27eda647" containerID="1bc47c5c4fdac778a460a07334cd383766f17901c74c9d8f4ff97665c3709e74" exitCode=0 Dec 12 15:50:01 crc kubenswrapper[4693]: I1212 15:50:01.633894 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb99c" event={"ID":"58415397-b1c4-41c4-abd4-518a27eda647","Type":"ContainerDied","Data":"1bc47c5c4fdac778a460a07334cd383766f17901c74c9d8f4ff97665c3709e74"} Dec 12 15:50:01 crc kubenswrapper[4693]: I1212 15:50:01.636042 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmcqt" event={"ID":"7cce9d41-da95-4956-bdb8-f234c2f96bac","Type":"ContainerStarted","Data":"4bffb2ee83587f4d98be94bf88c65760f8edcfd458ff94dbb3ceca3f528cfdee"} Dec 12 15:50:01 crc kubenswrapper[4693]: I1212 15:50:01.637831 4693 generic.go:334] "Generic (PLEG): container finished" podID="a12f193b-21da-485e-a825-03f5bd5070b1" containerID="5e0f954137266b19ffa37e81a62b858692fbf293323495ae8ea1460a06ab73bd" exitCode=0 Dec 12 15:50:01 crc kubenswrapper[4693]: I1212 15:50:01.637853 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvk2k" event={"ID":"a12f193b-21da-485e-a825-03f5bd5070b1","Type":"ContainerDied","Data":"5e0f954137266b19ffa37e81a62b858692fbf293323495ae8ea1460a06ab73bd"} Dec 12 15:50:01 crc kubenswrapper[4693]: I1212 15:50:01.661441 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v9pf7" 
podStartSLOduration=2.916461091 podStartE2EDuration="1m7.661423875s" podCreationTimestamp="2025-12-12 15:48:54 +0000 UTC" firstStartedPulling="2025-12-12 15:48:55.590741202 +0000 UTC m=+162.759380803" lastFinishedPulling="2025-12-12 15:50:00.335703986 +0000 UTC m=+227.504343587" observedRunningTime="2025-12-12 15:50:01.655200949 +0000 UTC m=+228.823840550" watchObservedRunningTime="2025-12-12 15:50:01.661423875 +0000 UTC m=+228.830063476" Dec 12 15:50:01 crc kubenswrapper[4693]: I1212 15:50:01.732583 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p4cpj" podStartSLOduration=3.01053207 podStartE2EDuration="1m7.732568909s" podCreationTimestamp="2025-12-12 15:48:54 +0000 UTC" firstStartedPulling="2025-12-12 15:48:55.597562816 +0000 UTC m=+162.766202417" lastFinishedPulling="2025-12-12 15:50:00.319599655 +0000 UTC m=+227.488239256" observedRunningTime="2025-12-12 15:50:01.731153591 +0000 UTC m=+228.899793202" watchObservedRunningTime="2025-12-12 15:50:01.732568909 +0000 UTC m=+228.901208510" Dec 12 15:50:03 crc kubenswrapper[4693]: I1212 15:50:03.657162 4693 generic.go:334] "Generic (PLEG): container finished" podID="7cce9d41-da95-4956-bdb8-f234c2f96bac" containerID="4bffb2ee83587f4d98be94bf88c65760f8edcfd458ff94dbb3ceca3f528cfdee" exitCode=0 Dec 12 15:50:03 crc kubenswrapper[4693]: I1212 15:50:03.657247 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmcqt" event={"ID":"7cce9d41-da95-4956-bdb8-f234c2f96bac","Type":"ContainerDied","Data":"4bffb2ee83587f4d98be94bf88c65760f8edcfd458ff94dbb3ceca3f528cfdee"} Dec 12 15:50:04 crc kubenswrapper[4693]: I1212 15:50:04.462895 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v9pf7" Dec 12 15:50:04 crc kubenswrapper[4693]: I1212 15:50:04.462946 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v9pf7" Dec 12 15:50:04 crc kubenswrapper[4693]: I1212 15:50:04.539910 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v9pf7" Dec 12 15:50:04 crc kubenswrapper[4693]: I1212 15:50:04.905925 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:50:04 crc kubenswrapper[4693]: I1212 15:50:04.905975 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:50:04 crc kubenswrapper[4693]: I1212 15:50:04.953023 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:50:05 crc kubenswrapper[4693]: I1212 15:50:05.707869 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:50:06 crc kubenswrapper[4693]: I1212 15:50:06.527838 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p4cpj"] Dec 12 15:50:07 crc kubenswrapper[4693]: I1212 15:50:07.675868 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p4cpj" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" containerName="registry-server" containerID="cri-o://e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895" gracePeriod=2 Dec 12 15:50:10 crc 
kubenswrapper[4693]: I1212 15:50:10.705287 4693 generic.go:334] "Generic (PLEG): container finished" podID="38d663d8-7b9e-4685-9b27-cdf525b225af" containerID="e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895" exitCode=0 Dec 12 15:50:10 crc kubenswrapper[4693]: I1212 15:50:10.705781 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4cpj" event={"ID":"38d663d8-7b9e-4685-9b27-cdf525b225af","Type":"ContainerDied","Data":"e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895"} Dec 12 15:50:14 crc kubenswrapper[4693]: I1212 15:50:14.513687 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v9pf7" Dec 12 15:50:14 crc kubenswrapper[4693]: E1212 15:50:14.907812 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895 is running failed: container process not found" containerID="e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 15:50:14 crc kubenswrapper[4693]: E1212 15:50:14.908899 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895 is running failed: container process not found" containerID="e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 15:50:14 crc kubenswrapper[4693]: E1212 15:50:14.909496 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895 is running failed: container process not found" containerID="e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 15:50:14 crc kubenswrapper[4693]: E1212 15:50:14.909539 4693 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-p4cpj" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" containerName="registry-server" Dec 12 15:50:15 crc kubenswrapper[4693]: I1212 15:50:15.640897 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" containerName="oauth-openshift" containerID="cri-o://0d9dedc7417633b8172baed76d019a70c67ede4b5c8c382565e85cfb9543e6ac" gracePeriod=15 Dec 12 15:50:16 crc kubenswrapper[4693]: I1212 15:50:16.878076 4693 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-wm8d5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" start-of-body= Dec 12 15:50:16 crc kubenswrapper[4693]: I1212 15:50:16.878550 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" containerName="oauth-openshift" 
probeResult="failure" output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" Dec 12 15:50:21 crc kubenswrapper[4693]: I1212 15:50:21.778690 4693 generic.go:334] "Generic (PLEG): container finished" podID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" containerID="0d9dedc7417633b8172baed76d019a70c67ede4b5c8c382565e85cfb9543e6ac" exitCode=0 Dec 12 15:50:21 crc kubenswrapper[4693]: I1212 15:50:21.778786 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" event={"ID":"f743d3ca-28a7-4e25-955f-1385b9ef8c05","Type":"ContainerDied","Data":"0d9dedc7417633b8172baed76d019a70c67ede4b5c8c382565e85cfb9543e6ac"} Dec 12 15:50:24 crc kubenswrapper[4693]: E1212 15:50:24.906739 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895 is running failed: container process not found" containerID="e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 15:50:24 crc kubenswrapper[4693]: E1212 15:50:24.907384 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895 is running failed: container process not found" containerID="e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 15:50:24 crc kubenswrapper[4693]: E1212 15:50:24.908068 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895 is running failed: container process not found" containerID="e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 15:50:24 crc kubenswrapper[4693]: E1212 15:50:24.908249 4693 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-p4cpj" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" containerName="registry-server" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.898099 4693 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 15:50:25 crc kubenswrapper[4693]: E1212 15:50:25.899031 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15291be2-8094-4d00-9f1a-e113b25a15ff" containerName="pruner" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.899058 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="15291be2-8094-4d00-9f1a-e113b25a15ff" containerName="pruner" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.899260 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="15291be2-8094-4d00-9f1a-e113b25a15ff" containerName="pruner" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.899739 4693 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.899913 4693 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.900135 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784" gracePeriod=15 Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.900252 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a" gracePeriod=15 Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.900376 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe" gracePeriod=15 Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.900326 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7" gracePeriod=15 Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.900326 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270" gracePeriod=15 Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.901436 4693 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 15:50:25 crc kubenswrapper[4693]: E1212 15:50:25.901768 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.901809 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 15:50:25 crc kubenswrapper[4693]: E1212 15:50:25.902195 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.902224 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 12 15:50:25 crc kubenswrapper[4693]: E1212 15:50:25.902252 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.902270 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 12 15:50:25 crc kubenswrapper[4693]: E1212 15:50:25.902634 4693 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.902656 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 12 15:50:25 crc kubenswrapper[4693]: E1212 15:50:25.902670 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.902682 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 15:50:25 crc kubenswrapper[4693]: E1212 15:50:25.902699 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.902715 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 12 15:50:25 crc kubenswrapper[4693]: E1212 15:50:25.902734 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.902745 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.902910 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.902935 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.902949 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.902970 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.902986 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.903000 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.906669 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.906746 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.906786 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.906856 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.906917 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.981572 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.982177 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:25 crc kubenswrapper[4693]: I1212 15:50:25.982412 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:25 crc kubenswrapper[4693]: E1212 15:50:25.998135 4693 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.204:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.007682 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.007738 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.007861 4693 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.007909 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.007917 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.007966 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.007973 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.008024 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.008079 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.008139 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.108584 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdmc2\" (UniqueName: \"kubernetes.io/projected/38d663d8-7b9e-4685-9b27-cdf525b225af-kube-api-access-mdmc2\") pod \"38d663d8-7b9e-4685-9b27-cdf525b225af\" (UID: \"38d663d8-7b9e-4685-9b27-cdf525b225af\") " Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.108710 4693 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d663d8-7b9e-4685-9b27-cdf525b225af-utilities\") pod \"38d663d8-7b9e-4685-9b27-cdf525b225af\" (UID: \"38d663d8-7b9e-4685-9b27-cdf525b225af\") " Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.108732 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d663d8-7b9e-4685-9b27-cdf525b225af-catalog-content\") pod \"38d663d8-7b9e-4685-9b27-cdf525b225af\" (UID: \"38d663d8-7b9e-4685-9b27-cdf525b225af\") " Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.108883 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.109110 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.109228 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.109719 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38d663d8-7b9e-4685-9b27-cdf525b225af-utilities" (OuterVolumeSpecName: "utilities") pod "38d663d8-7b9e-4685-9b27-cdf525b225af" (UID: "38d663d8-7b9e-4685-9b27-cdf525b225af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.116153 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38d663d8-7b9e-4685-9b27-cdf525b225af-kube-api-access-mdmc2" (OuterVolumeSpecName: "kube-api-access-mdmc2") pod "38d663d8-7b9e-4685-9b27-cdf525b225af" (UID: "38d663d8-7b9e-4685-9b27-cdf525b225af"). InnerVolumeSpecName "kube-api-access-mdmc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.182177 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38d663d8-7b9e-4685-9b27-cdf525b225af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38d663d8-7b9e-4685-9b27-cdf525b225af" (UID: "38d663d8-7b9e-4685-9b27-cdf525b225af"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.209897 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.209984 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.210017 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.210025 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.210071 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.210092 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdmc2\" (UniqueName: \"kubernetes.io/projected/38d663d8-7b9e-4685-9b27-cdf525b225af-kube-api-access-mdmc2\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.210120 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d663d8-7b9e-4685-9b27-cdf525b225af-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.210138 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d663d8-7b9e-4685-9b27-cdf525b225af-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.210202 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.299020 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.823330 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4cpj" event={"ID":"38d663d8-7b9e-4685-9b27-cdf525b225af","Type":"ContainerDied","Data":"c3b69af7709adfc187eb8fa3830046b2b702c3a183804d0380eb1b817a8ab964"} Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.823421 4693 scope.go:117] "RemoveContainer" containerID="e1b4933f37406b08db2d5b443b320385ff3b2d1fe6e7ba03eb9b00f53a498895" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.823420 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4cpj" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.824744 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.825946 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.846682 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.847027 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.877943 4693 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-wm8d5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" start-of-body= Dec 12 15:50:26 crc kubenswrapper[4693]: I1212 15:50:26.878018 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" Dec 12 15:50:26 crc kubenswrapper[4693]: E1212 15:50:26.878673 4693 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events/oauth-openshift-558db77b4-wm8d5.18808291da98388c\": dial tcp 38.102.83.204:6443: connect: connection refused" event=< Dec 12 15:50:26 crc kubenswrapper[4693]: &Event{ObjectMeta:{oauth-openshift-558db77b4-wm8d5.18808291da98388c 
openshift-authentication 29455 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-558db77b4-wm8d5,UID:f743d3ca-28a7-4e25-955f-1385b9ef8c05,APIVersion:v1,ResourceVersion:27228,FieldPath:spec.containers{oauth-openshift},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.27:6443/healthz": dial tcp 10.217.0.27:6443: connect: connection refused Dec 12 15:50:26 crc kubenswrapper[4693]: body: Dec 12 15:50:26 crc kubenswrapper[4693]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:50:16 +0000 UTC,LastTimestamp:2025-12-12 15:50:26.877996371 +0000 UTC m=+254.046635972,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 15:50:26 crc kubenswrapper[4693]: > Dec 12 15:50:27 crc kubenswrapper[4693]: I1212 15:50:27.831859 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 12 15:50:27 crc kubenswrapper[4693]: I1212 15:50:27.836119 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 12 15:50:27 crc kubenswrapper[4693]: I1212 15:50:27.837369 4693 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270" exitCode=2 Dec 12 15:50:28 crc kubenswrapper[4693]: I1212 15:50:28.858029 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 12 15:50:28 crc kubenswrapper[4693]: I1212 15:50:28.862941 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 12 15:50:28 crc kubenswrapper[4693]: I1212 15:50:28.864322 4693 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe" exitCode=0 Dec 12 15:50:28 crc kubenswrapper[4693]: I1212 15:50:28.864361 4693 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a" exitCode=0 Dec 12 15:50:28 crc kubenswrapper[4693]: I1212 15:50:28.864372 4693 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7" exitCode=0 Dec 12 15:50:28 crc kubenswrapper[4693]: I1212 15:50:28.864381 4693 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784" exitCode=0 Dec 12 15:50:28 crc kubenswrapper[4693]: I1212 15:50:28.867151 4693 generic.go:334] "Generic (PLEG): container finished" podID="ffb66d17-57a0-4b72-803c-0daec41c6e72" containerID="847f24dbaf838c7528f3858f775cc3c624e8448b1e49734d48d2bfe21971ea1d" exitCode=0 Dec 12 15:50:28 crc kubenswrapper[4693]: I1212 15:50:28.867212 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"ffb66d17-57a0-4b72-803c-0daec41c6e72","Type":"ContainerDied","Data":"847f24dbaf838c7528f3858f775cc3c624e8448b1e49734d48d2bfe21971ea1d"} Dec 12 15:50:28 crc kubenswrapper[4693]: I1212 15:50:28.868028 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:28 crc kubenswrapper[4693]: I1212 15:50:28.868399 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: E1212 15:50:29.061549 4693 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events/oauth-openshift-558db77b4-wm8d5.18808291da98388c\": dial tcp 38.102.83.204:6443: connect: connection refused" event=< Dec 12 15:50:29 crc kubenswrapper[4693]: &Event{ObjectMeta:{oauth-openshift-558db77b4-wm8d5.18808291da98388c openshift-authentication 29455 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-558db77b4-wm8d5,UID:f743d3ca-28a7-4e25-955f-1385b9ef8c05,APIVersion:v1,ResourceVersion:27228,FieldPath:spec.containers{oauth-openshift},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.27:6443/healthz": dial tcp 10.217.0.27:6443: connect: connection refused Dec 12 15:50:29 crc kubenswrapper[4693]: body: Dec 12 15:50:29 crc kubenswrapper[4693]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:50:16 +0000 UTC,LastTimestamp:2025-12-12 15:50:26.877996371 +0000 UTC m=+254.046635972,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 15:50:29 crc kubenswrapper[4693]: > Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.450417 4693 scope.go:117] "RemoveContainer" containerID="d18a369926247758c7b0e338b15f09faefa219778141face02f9b517a29874e0" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.589749 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.590600 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.590902 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.592703 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.602374 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.602513 4693 scope.go:117] "RemoveContainer" containerID="b308896f564709a5a6d8f1c0bf7725afd2ee438e13adbe94cbcb3fba4788df77" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.604565 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.605942 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.608091 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.609007 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.609552 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.610005 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.628576 4693 scope.go:117] "RemoveContainer" containerID="ec8735b6bc0c3a6967f22f1be4da6e44d2b1dfe224482ac5e13596999c1eba5e" Dec 12 15:50:29 crc kubenswrapper[4693]: W1212 15:50:29.649701 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-abe35ab812a07428fd6b052e362451aa23dc6ad4810ac2d107dcc7e9606366dd WatchSource:0}: Error finding container abe35ab812a07428fd6b052e362451aa23dc6ad4810ac2d107dcc7e9606366dd: Status 404 returned error can't find the container with id abe35ab812a07428fd6b052e362451aa23dc6ad4810ac2d107dcc7e9606366dd Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.698515 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f743d3ca-28a7-4e25-955f-1385b9ef8c05-audit-dir\") pod \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.698593 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-provider-selection\") pod \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.698650 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-audit-policies\") pod \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.698710 4693 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-trusted-ca-bundle\") pod \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.698637 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f743d3ca-28a7-4e25-955f-1385b9ef8c05-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f743d3ca-28a7-4e25-955f-1385b9ef8c05" (UID: "f743d3ca-28a7-4e25-955f-1385b9ef8c05"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.698755 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-router-certs\") pod \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.698790 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-session\") pod \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.698882 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-cliconfig\") pod \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.698957 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-login\") pod \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.698999 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-error\") pod \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.699049 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-idp-0-file-data\") pod \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.699084 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-serving-cert\") pod \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.699134 4693 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-64kxm\" (UniqueName: \"kubernetes.io/projected/f743d3ca-28a7-4e25-955f-1385b9ef8c05-kube-api-access-64kxm\") pod \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.699184 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-ocp-branding-template\") pod \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.699221 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-service-ca\") pod \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\" (UID: \"f743d3ca-28a7-4e25-955f-1385b9ef8c05\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.699520 4693 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f743d3ca-28a7-4e25-955f-1385b9ef8c05-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.699743 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f743d3ca-28a7-4e25-955f-1385b9ef8c05" (UID: "f743d3ca-28a7-4e25-955f-1385b9ef8c05"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.699771 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f743d3ca-28a7-4e25-955f-1385b9ef8c05" (UID: "f743d3ca-28a7-4e25-955f-1385b9ef8c05"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.700164 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f743d3ca-28a7-4e25-955f-1385b9ef8c05" (UID: "f743d3ca-28a7-4e25-955f-1385b9ef8c05"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.700192 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f743d3ca-28a7-4e25-955f-1385b9ef8c05" (UID: "f743d3ca-28a7-4e25-955f-1385b9ef8c05"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.705401 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f743d3ca-28a7-4e25-955f-1385b9ef8c05" (UID: "f743d3ca-28a7-4e25-955f-1385b9ef8c05"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.705996 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f743d3ca-28a7-4e25-955f-1385b9ef8c05" (UID: "f743d3ca-28a7-4e25-955f-1385b9ef8c05"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.707278 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f743d3ca-28a7-4e25-955f-1385b9ef8c05" (UID: "f743d3ca-28a7-4e25-955f-1385b9ef8c05"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.707618 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f743d3ca-28a7-4e25-955f-1385b9ef8c05" (UID: "f743d3ca-28a7-4e25-955f-1385b9ef8c05"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.708716 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f743d3ca-28a7-4e25-955f-1385b9ef8c05-kube-api-access-64kxm" (OuterVolumeSpecName: "kube-api-access-64kxm") pod "f743d3ca-28a7-4e25-955f-1385b9ef8c05" (UID: "f743d3ca-28a7-4e25-955f-1385b9ef8c05"). InnerVolumeSpecName "kube-api-access-64kxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.711709 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f743d3ca-28a7-4e25-955f-1385b9ef8c05" (UID: "f743d3ca-28a7-4e25-955f-1385b9ef8c05"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.712036 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f743d3ca-28a7-4e25-955f-1385b9ef8c05" (UID: "f743d3ca-28a7-4e25-955f-1385b9ef8c05"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.715093 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f743d3ca-28a7-4e25-955f-1385b9ef8c05" (UID: "f743d3ca-28a7-4e25-955f-1385b9ef8c05"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.716608 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f743d3ca-28a7-4e25-955f-1385b9ef8c05" (UID: "f743d3ca-28a7-4e25-955f-1385b9ef8c05"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.800373 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.800509 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.800549 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.800547 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.800620 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.800725 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.801627 4693 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.801670 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.801686 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.801705 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.801718 4693 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.801730 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.801742 4693 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.801754 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.801769 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.801784 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.801797 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.801813 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64kxm\" (UniqueName: \"kubernetes.io/projected/f743d3ca-28a7-4e25-955f-1385b9ef8c05-kube-api-access-64kxm\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 
15:50:29.801828 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.801842 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.801856 4693 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f743d3ca-28a7-4e25-955f-1385b9ef8c05-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.801872 4693 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f743d3ca-28a7-4e25-955f-1385b9ef8c05-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.923653 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmcqt" event={"ID":"7cce9d41-da95-4956-bdb8-f234c2f96bac","Type":"ContainerStarted","Data":"5a6d9d11149436685192e96aa236eeaf60aeff9528605ea6dd85a485728d4980"} Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.924862 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.925556 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.925774 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.925876 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"abe35ab812a07428fd6b052e362451aa23dc6ad4810ac2d107dcc7e9606366dd"} Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.925987 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.926256 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" 
pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.928877 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4lmj" event={"ID":"1790176a-e8f5-4490-b020-53392f0475cc","Type":"ContainerStarted","Data":"c6beaeaff16c4e3569d403b9f74e046ca48fa1f56b83a0ee849688e2f5ef8513"} Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.929505 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.929796 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.930270 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.930594 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.930829 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.931109 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.932211 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klr5t" event={"ID":"6a461525-8c58-4454-b928-32dfc677061b","Type":"ContainerStarted","Data":"755f303c7fae100c87c87307a41f01ac030c685925927c7b6d08fd5f866be186"} Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.933038 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.933199 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.933455 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.933783 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.933945 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.934105 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.934254 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.936539 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.937093 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" event={"ID":"f743d3ca-28a7-4e25-955f-1385b9ef8c05","Type":"ContainerDied","Data":"68468d38e642ecd9302b1c4b7db986de32aea3df59ee282daba49eb7387c8979"} Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.937148 4693 scope.go:117] "RemoveContainer" containerID="0d9dedc7417633b8172baed76d019a70c67ede4b5c8c382565e85cfb9543e6ac" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.937617 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.937781 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.937940 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.938099 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.938451 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.940200 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.940642 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.946965 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvk2k" 
event={"ID":"a12f193b-21da-485e-a825-03f5bd5070b1","Type":"ContainerStarted","Data":"e97bb28cd0d861b09997b0d258a8c2d03c90fd82ff7e726fff922138bf023191"} Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.949483 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.950054 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.950464 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.954544 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.955082 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.955353 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.955944 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.956543 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.964956 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.965160 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.970564 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.971320 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.971883 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.972342 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.972548 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.972664 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.972712 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.972868 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.973100 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.973255 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.973428 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.973795 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.974634 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.975369 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.975803 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.976085 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.976558 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb99c" event={"ID":"58415397-b1c4-41c4-abd4-518a27eda647","Type":"ContainerStarted","Data":"1714907fe89887ef24be8084bdf19db55965350dcbbb5bb55acdbcca9ab041f0"} Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.977667 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.977835 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.978020 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.978326 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.978895 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.979459 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.981460 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.982045 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.982512 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:29 crc kubenswrapper[4693]: I1212 15:50:29.990622 4693 scope.go:117] "RemoveContainer" containerID="26504fa779367b24de312badc36a16cb2904e1c6c15685fc11ee806db6cf90fe" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.001081 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.008909 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.009662 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.010006 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.010156 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.010323 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 
15:50:30.010518 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.010667 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.010804 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.024217 4693 scope.go:117] "RemoveContainer" containerID="4a5a235ee87ce069fe32e3de4ec49f5e81430fca18f0ece998451d6ebe9c8c6a" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.169174 4693 scope.go:117] "RemoveContainer" containerID="dd8dcfa03dceaf1f70a77243b68cccd4832bd2e4ee21b0f08916ee966d7ff4c7" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.253954 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.254868 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.255402 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.255990 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.256351 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.256665 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.256995 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.257075 4693 scope.go:117] "RemoveContainer" containerID="662e5685891bcbf53c25a740a8f45a8c99b2b98a22bfa99aa6e766c5f5bc1270" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.257373 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.257690 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.257924 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.277073 4693 scope.go:117] "RemoveContainer" containerID="760dd7ef5a37abb086301d65b94218cbf7e06cb47a6e23b32ada3e58e77c6784" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.294996 4693 scope.go:117] "RemoveContainer" containerID="d6441b9c0ae460f37d50f008094fafc2a0f1b26a1ef673855adca11d7a5e8d02" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.411025 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ffb66d17-57a0-4b72-803c-0daec41c6e72-var-lock\") pod \"ffb66d17-57a0-4b72-803c-0daec41c6e72\" (UID: \"ffb66d17-57a0-4b72-803c-0daec41c6e72\") " Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.411140 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffb66d17-57a0-4b72-803c-0daec41c6e72-kube-api-access\") pod \"ffb66d17-57a0-4b72-803c-0daec41c6e72\" (UID: \"ffb66d17-57a0-4b72-803c-0daec41c6e72\") " Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.411185 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb66d17-57a0-4b72-803c-0daec41c6e72-var-lock" (OuterVolumeSpecName: "var-lock") pod "ffb66d17-57a0-4b72-803c-0daec41c6e72" (UID: "ffb66d17-57a0-4b72-803c-0daec41c6e72"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.411225 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffb66d17-57a0-4b72-803c-0daec41c6e72-kubelet-dir\") pod \"ffb66d17-57a0-4b72-803c-0daec41c6e72\" (UID: \"ffb66d17-57a0-4b72-803c-0daec41c6e72\") " Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.411346 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb66d17-57a0-4b72-803c-0daec41c6e72-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ffb66d17-57a0-4b72-803c-0daec41c6e72" (UID: "ffb66d17-57a0-4b72-803c-0daec41c6e72"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.411537 4693 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ffb66d17-57a0-4b72-803c-0daec41c6e72-var-lock\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.411548 4693 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffb66d17-57a0-4b72-803c-0daec41c6e72-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.417327 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb66d17-57a0-4b72-803c-0daec41c6e72-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ffb66d17-57a0-4b72-803c-0daec41c6e72" (UID: "ffb66d17-57a0-4b72-803c-0daec41c6e72"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.513103 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffb66d17-57a0-4b72-803c-0daec41c6e72-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.985576 4693 generic.go:334] "Generic (PLEG): container finished" podID="1790176a-e8f5-4490-b020-53392f0475cc" containerID="c6beaeaff16c4e3569d403b9f74e046ca48fa1f56b83a0ee849688e2f5ef8513" exitCode=0 Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.985682 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4lmj" event={"ID":"1790176a-e8f5-4490-b020-53392f0475cc","Type":"ContainerDied","Data":"c6beaeaff16c4e3569d403b9f74e046ca48fa1f56b83a0ee849688e2f5ef8513"} Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.986772 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.987070 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.987358 4693 status_manager.go:851] 
"Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.987707 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.988130 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.988385 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.988579 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.988757 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.988998 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.989706 4693 generic.go:334] "Generic (PLEG): container finished" podID="6a461525-8c58-4454-b928-32dfc677061b" containerID="755f303c7fae100c87c87307a41f01ac030c685925927c7b6d08fd5f866be186" exitCode=0 Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.989766 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klr5t" event={"ID":"6a461525-8c58-4454-b928-32dfc677061b","Type":"ContainerDied","Data":"755f303c7fae100c87c87307a41f01ac030c685925927c7b6d08fd5f866be186"} Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.990545 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.990863 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.991198 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.991809 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.992321 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.993299 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.996732 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.997112 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:30 crc kubenswrapper[4693]: I1212 15:50:30.997695 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.001837 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"ffb66d17-57a0-4b72-803c-0daec41c6e72","Type":"ContainerDied","Data":"579961c8416b9dc31c5182702c5c08f56a81c43079a8aa0636cb148a563c82c8"} Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.001878 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="579961c8416b9dc31c5182702c5c08f56a81c43079a8aa0636cb148a563c82c8" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.001972 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.006511 4693 generic.go:334] "Generic (PLEG): container finished" podID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" containerID="7c576c29b8d074a0dea628a11853602826fa4335cbec1543b0ce6264e9a10592" exitCode=0 Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.006602 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctq58" event={"ID":"77421421-26f5-4e9a-8857-bd1f5a9d8fa9","Type":"ContainerDied","Data":"7c576c29b8d074a0dea628a11853602826fa4335cbec1543b0ce6264e9a10592"} Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.007424 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.007632 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.007815 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.008430 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.008875 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"dfce17445ea735432f77f51f29e615d00e597f00c6e937e475bd626e9f418a32"} Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.008929 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: E1212 15:50:31.009851 4693 kubelet.go:1929] "Failed creating a 
mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.204:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.009867 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.010154 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.010526 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.011139 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.011431 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.012337 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.012753 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.013149 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.013461 4693 status_manager.go:851] "Failed to get status for pod" 
podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.013875 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.014371 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.014912 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.015513 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.015792 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.016142 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.017555 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.017829 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.018109 4693 status_manager.go:851] "Failed to get status 
for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.018520 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.019583 4693 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.019948 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.020564 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.020853 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.021098 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.021364 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: E1212 15:50:31.106784 4693 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: E1212 15:50:31.107364 4693 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: E1212 15:50:31.107705 4693 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: E1212 15:50:31.107995 4693 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: E1212 15:50:31.108502 4693 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.108591 4693 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 12 15:50:31 crc kubenswrapper[4693]: E1212 15:50:31.109155 4693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="200ms" Dec 12 15:50:31 crc kubenswrapper[4693]: E1212 15:50:31.310725 4693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="400ms" Dec 12 15:50:31 crc kubenswrapper[4693]: I1212 15:50:31.364999 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 12 15:50:31 crc kubenswrapper[4693]: E1212 15:50:31.711188 4693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="800ms" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.017362 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klr5t" event={"ID":"6a461525-8c58-4454-b928-32dfc677061b","Type":"ContainerStarted","Data":"6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60"} Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.018171 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.018538 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial 
tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.018805 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.019075 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.019351 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.019453 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4lmj" event={"ID":"1790176a-e8f5-4490-b020-53392f0475cc","Type":"ContainerStarted","Data":"3524de0d28f3353e735b1679611a0661782e5f8de7f62f3c99bd35558958631e"} Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.019583 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.019789 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.020023 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.020243 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.020620 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 
15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.020991 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.021451 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.021572 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctq58" event={"ID":"77421421-26f5-4e9a-8857-bd1f5a9d8fa9","Type":"ContainerStarted","Data":"a6e0d54ef189f3fe3850813ef14d07bb6ad944d2080e313f87c5909d94981054"} Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.021644 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.021922 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: E1212 15:50:32.022070 4693 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.204:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.022195 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.022563 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.022896 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.023254 4693 status_manager.go:851] "Failed to get 
status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.023753 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.024069 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.024443 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.024762 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.025105 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.025463 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.025757 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.026108 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: I1212 15:50:32.026441 4693 status_manager.go:851] 
"Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:32 crc kubenswrapper[4693]: E1212 15:50:32.512742 4693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="1.6s" Dec 12 15:50:33 crc kubenswrapper[4693]: I1212 15:50:33.360637 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:33 crc kubenswrapper[4693]: I1212 15:50:33.361732 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:33 crc kubenswrapper[4693]: I1212 15:50:33.362470 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:33 crc kubenswrapper[4693]: I1212 15:50:33.362817 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:33 crc kubenswrapper[4693]: I1212 15:50:33.363326 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:33 crc kubenswrapper[4693]: I1212 15:50:33.364165 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:33 crc kubenswrapper[4693]: I1212 15:50:33.364595 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:33 crc kubenswrapper[4693]: I1212 15:50:33.365086 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" 
pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:33 crc kubenswrapper[4693]: I1212 15:50:33.365413 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: E1212 15:50:34.113765 4693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="3.2s" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.262678 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fvk2k" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.262722 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fvk2k" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.310512 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fvk2k" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.311133 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.311616 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.312197 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.312520 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.312844 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" 
Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.313158 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.313558 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.313868 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.314153 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.679763 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ctq58" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.680136 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ctq58" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.727714 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ctq58" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.728463 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.728870 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.729177 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.729550 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.729851 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.730252 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.730913 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.731244 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:34 crc kubenswrapper[4693]: I1212 15:50:34.731592 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:35 crc kubenswrapper[4693]: I1212 15:50:35.086671 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fvk2k" Dec 12 15:50:35 crc kubenswrapper[4693]: I1212 15:50:35.087597 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:35 crc kubenswrapper[4693]: I1212 15:50:35.088078 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:35 crc kubenswrapper[4693]: I1212 15:50:35.088556 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection 
refused" Dec 12 15:50:35 crc kubenswrapper[4693]: I1212 15:50:35.088917 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:35 crc kubenswrapper[4693]: I1212 15:50:35.089263 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:35 crc kubenswrapper[4693]: I1212 15:50:35.089614 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:35 crc kubenswrapper[4693]: I1212 15:50:35.089894 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:35 crc kubenswrapper[4693]: I1212 15:50:35.090247 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:35 crc kubenswrapper[4693]: I1212 15:50:35.090569 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: E1212 15:50:36.325441 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:50:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:50:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:50:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T15:50:36Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:b295ecbeb669e7fe4668a62e3b5a215e25e76f8847d56b8dded02988a94e4aba\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:f1ca2393659be90fe7121a54bcc3015ffc91c7c5830c6f71f698446b715a6ab3\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1634292050},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:2bed8b5bd559a0a263b91e1b4fd532e5fb6eea838966793dbe88a921e8e1f4d3\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:8a9146149ca95107aa49e799228087bc70c76cd0611e84df69e409e4fe727d1a\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1236215142},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:65b8b98b46c0aceca9c587fd0a0e499c213a922a96d18a525b2af1a030cdbbfa\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:68691fe8673dbab4ab7935ea188c109d038b071d4d9be2a89d2c277d6d4120d5\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1200611670},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1bd771794f1785eb9137335fe2468e49b18e63fd12105305f837af4ebbe97e2e\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c29a0091864f17621dc93217af0c0ad31d9ade4837c2e1c2161172d691818df9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1152844048},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\
\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: E1212 15:50:36.326551 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: E1212 15:50:36.327044 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: E1212 15:50:36.327409 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: E1212 15:50:36.327794 4693 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: E1212 15:50:36.327831 4693 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.418783 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.418864 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.464089 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.465050 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.465682 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.466244 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.466674 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.467269 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.467709 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.468187 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.468753 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.469255 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.831752 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.831840 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.882022 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.882876 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.883445 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.883897 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection 
refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.884335 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.884708 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.885064 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.885420 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.885772 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:36 crc kubenswrapper[4693]: I1212 15:50:36.886535 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.100518 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.101683 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.102413 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.102892 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" 
pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.103354 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.103709 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.104231 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.104844 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.105404 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.105884 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.114583 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.115448 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.115922 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: 
connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.116607 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.117880 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.118529 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.118942 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.119478 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.119902 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.120322 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: E1212 15:50:37.314444 4693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="6.4s" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.357262 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.357960 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.358463 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.358698 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.358852 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.358987 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.359133 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.359290 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.359425 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.359604 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.377658 4693 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c30e8235-7ceb-42a8-86d0-a1b89dd6cf07" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.377687 4693 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c30e8235-7ceb-42a8-86d0-a1b89dd6cf07" Dec 12 15:50:37 crc kubenswrapper[4693]: E1212 15:50:37.378125 4693 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.378788 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:50:37 crc kubenswrapper[4693]: W1212 15:50:37.400641 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-06ef74b05f347890ef0e31be1213f4e436b1166b4277ec3228f33937f30ec8e9 WatchSource:0}: Error finding container 06ef74b05f347890ef0e31be1213f4e436b1166b4277ec3228f33937f30ec8e9: Status 404 returned error can't find the container with id 06ef74b05f347890ef0e31be1213f4e436b1166b4277ec3228f33937f30ec8e9 Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.747861 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.747909 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.790863 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.791725 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.792224 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.792657 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.793022 4693 status_manager.go:851] "Failed to get status for pod" 
podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.793473 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.793729 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.794009 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.794484 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:37 crc kubenswrapper[4693]: I1212 15:50:37.794890 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.063058 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"06ef74b05f347890ef0e31be1213f4e436b1166b4277ec3228f33937f30ec8e9"} Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.098595 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.099252 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.099891 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: 
connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.100406 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.100698 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.101074 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.101313 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.101601 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.101980 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.102234 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.171757 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.171820 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.214847 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.215404 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" 
pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.215770 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.216095 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.216298 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.216497 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.216683 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.216830 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.216972 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:38 crc kubenswrapper[4693]: I1212 15:50:38.217164 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: E1212 15:50:39.063676 4693 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events/oauth-openshift-558db77b4-wm8d5.18808291da98388c\": dial tcp 38.102.83.204:6443: connect: connection refused" event=< Dec 12 15:50:39 crc kubenswrapper[4693]: &Event{ObjectMeta:{oauth-openshift-558db77b4-wm8d5.18808291da98388c openshift-authentication 29455 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-558db77b4-wm8d5,UID:f743d3ca-28a7-4e25-955f-1385b9ef8c05,APIVersion:v1,ResourceVersion:27228,FieldPath:spec.containers{oauth-openshift},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.27:6443/healthz": dial tcp 10.217.0.27:6443: connect: connection refused Dec 12 15:50:39 crc kubenswrapper[4693]: body: Dec 12 15:50:39 crc kubenswrapper[4693]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 15:50:16 +0000 UTC,LastTimestamp:2025-12-12 15:50:26.877996371 +0000 UTC m=+254.046635972,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 15:50:39 crc kubenswrapper[4693]: > Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.076715 4693 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="a523c43b3fb96fa0893c8fe5ae1bd2cc47b72f8a42a979e7b15721ce5816a98c" exitCode=0 Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.076791 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"a523c43b3fb96fa0893c8fe5ae1bd2cc47b72f8a42a979e7b15721ce5816a98c"} Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.076970 4693 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c30e8235-7ceb-42a8-86d0-a1b89dd6cf07" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.076988 4693 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c30e8235-7ceb-42a8-86d0-a1b89dd6cf07" Dec 12 15:50:39 crc kubenswrapper[4693]: E1212 15:50:39.077411 4693 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.077686 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.078173 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.078928 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" 
pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.079260 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.079619 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.079967 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.080313 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.081799 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.082076 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.120773 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.121385 4693 status_manager.go:851] "Failed to get status for pod" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" pod="openshift-marketplace/community-operators-ctq58" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-ctq58\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.121780 4693 status_manager.go:851] "Failed to get status for pod" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" pod="openshift-marketplace/certified-operators-p4cpj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p4cpj\": dial tcp 38.102.83.204:6443: connect: 
connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.122084 4693 status_manager.go:851] "Failed to get status for pod" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" pod="openshift-authentication/oauth-openshift-558db77b4-wm8d5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-wm8d5\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.122324 4693 status_manager.go:851] "Failed to get status for pod" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" pod="openshift-marketplace/community-operators-fvk2k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-fvk2k\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.122533 4693 status_manager.go:851] "Failed to get status for pod" podUID="58415397-b1c4-41c4-abd4-518a27eda647" pod="openshift-marketplace/redhat-marketplace-kb99c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kb99c\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.122782 4693 status_manager.go:851] "Failed to get status for pod" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.123004 4693 status_manager.go:851] "Failed to get status for pod" podUID="1790176a-e8f5-4490-b020-53392f0475cc" pod="openshift-marketplace/redhat-marketplace-q4lmj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-q4lmj\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.123221 4693 status_manager.go:851] "Failed to get status for pod" podUID="6a461525-8c58-4454-b928-32dfc677061b" pod="openshift-marketplace/redhat-operators-klr5t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-klr5t\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:39 crc kubenswrapper[4693]: I1212 15:50:39.123451 4693 status_manager.go:851] "Failed to get status for pod" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" pod="openshift-marketplace/redhat-operators-zmcqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zmcqt\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 12 15:50:40 crc kubenswrapper[4693]: I1212 15:50:40.091717 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 12 15:50:40 crc kubenswrapper[4693]: I1212 15:50:40.092189 4693 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc" exitCode=1 Dec 12 15:50:40 crc kubenswrapper[4693]: I1212 15:50:40.092263 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc"} Dec 12 15:50:40 crc kubenswrapper[4693]: I1212 15:50:40.092956 4693 scope.go:117] "RemoveContainer" containerID="20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc" Dec 12 15:50:40 crc kubenswrapper[4693]: I1212 15:50:40.096897 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"38783fd2646d1244bbf2398e6dac9dfae13fe796f4a12fc0e255bd4bd0d5c2eb"} Dec 12 15:50:40 crc kubenswrapper[4693]: I1212 15:50:40.097046 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"af412531b236025b6b9e223886666a4cdfc870c131784f54ffe4406b127c0dc7"} Dec 12 15:50:40 crc kubenswrapper[4693]: I1212 15:50:40.097133 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"925af134bbb4986aca0338370807cf9525e50f9bb3062e0daa4f642e117709b1"} Dec 12 15:50:40 crc kubenswrapper[4693]: I1212 15:50:40.363170 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:50:40 crc kubenswrapper[4693]: I1212 15:50:40.694731 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:50:44 crc kubenswrapper[4693]: I1212 15:50:44.249444 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 12 15:50:44 crc kubenswrapper[4693]: I1212 15:50:44.249864 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1adb73c07e53bc378d1cea7ef797f24c5f8be2a84d6833262c2329d35ba64820"} Dec 12 15:50:44 crc kubenswrapper[4693]: I1212 15:50:44.254739 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7dd307d32c663a49fa8df508696cb68beb93717f21d888247d2f174a640910f2"} Dec 12 15:50:44 crc kubenswrapper[4693]: I1212 15:50:44.474812 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 15:50:44 crc kubenswrapper[4693]: I1212 15:50:44.723868 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ctq58" Dec 12 15:50:46 crc kubenswrapper[4693]: I1212 15:50:46.272523 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5a9f8884b09a0fad0e7f19fa270bd1687c999125454c1967a472497def1b28da"} Dec 12 15:50:46 crc kubenswrapper[4693]: I1212 15:50:46.272894 4693 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c30e8235-7ceb-42a8-86d0-a1b89dd6cf07" Dec 12 15:50:46 crc kubenswrapper[4693]: I1212 
15:50:46.272922 4693 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c30e8235-7ceb-42a8-86d0-a1b89dd6cf07"
Dec 12 15:50:46 crc kubenswrapper[4693]: I1212 15:50:46.281252 4693 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:50:46 crc kubenswrapper[4693]: I1212 15:50:46.296126 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925af134bbb4986aca0338370807cf9525e50f9bb3062e0daa4f642e117709b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:50:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38783fd2646d1244bbf2398e6dac9dfae13fe796f4a12fc0e255bd4bd0d5c2eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:50:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af412531b236025b6b9e223886666a4cdfc870c131784f54ffe4406b127c0dc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:50:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9f8884b09a0fad0e7f19fa270bd1687c999125454c1967a472497def1b28da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dd307d32c663a49fa8df508696cb68beb93717f21d888247d2f174a640910f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T15:50:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"c30e8235-7ceb-42a8-86d0-a1b89dd6cf07\": field is immutable"
Dec 12 15:50:47 crc kubenswrapper[4693]: I1212 15:50:47.277767 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:50:47 crc kubenswrapper[4693]: I1212 15:50:47.277822 4693 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c30e8235-7ceb-42a8-86d0-a1b89dd6cf07"
Dec 12 15:50:47 crc kubenswrapper[4693]: I1212 15:50:47.278088 4693 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c30e8235-7ceb-42a8-86d0-a1b89dd6cf07"
Dec 12 15:50:47 crc kubenswrapper[4693]: I1212 15:50:47.379107 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:50:47 crc kubenswrapper[4693]: I1212 15:50:47.379154 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:50:47 crc kubenswrapper[4693]: I1212 15:50:47.382958 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:50:47 crc kubenswrapper[4693]: I1212 15:50:47.385829 4693 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="af16d9bd-44c5-41e6-ba37-a0271764d65b"
Dec 12 15:50:48 crc kubenswrapper[4693]: I1212 15:50:48.284329 4693 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c30e8235-7ceb-42a8-86d0-a1b89dd6cf07"
Dec 12 15:50:48 crc kubenswrapper[4693]: I1212 15:50:48.284357 4693 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c30e8235-7ceb-42a8-86d0-a1b89dd6cf07"
Dec 12 15:50:48 crc kubenswrapper[4693]: I1212 15:50:48.287877 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:50:49 crc kubenswrapper[4693]: I1212 15:50:49.289038 4693 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c30e8235-7ceb-42a8-86d0-a1b89dd6cf07"
Dec 12 15:50:49 crc kubenswrapper[4693]: I1212 15:50:49.289430 4693 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c30e8235-7ceb-42a8-86d0-a1b89dd6cf07"
Dec 12 15:50:50 crc kubenswrapper[4693]: I1212 15:50:50.294429 4693 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c30e8235-7ceb-42a8-86d0-a1b89dd6cf07"
Dec 12 15:50:50 crc kubenswrapper[4693]: I1212 15:50:50.294473 4693 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c30e8235-7ceb-42a8-86d0-a1b89dd6cf07"
Dec 12 15:50:50 crc kubenswrapper[4693]: I1212 15:50:50.362861 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:50:50 crc kubenswrapper[4693]: I1212 15:50:50.367518 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:50:53 crc kubenswrapper[4693]: I1212 15:50:53.384025 4693 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="af16d9bd-44c5-41e6-ba37-a0271764d65b"
Dec 12 15:50:54 crc kubenswrapper[4693]: I1212 15:50:54.481300 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 15:50:56 crc kubenswrapper[4693]: I1212 15:50:56.048185 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Dec 12 15:50:56 crc kubenswrapper[4693]: I1212 15:50:56.768986 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Dec 12 15:50:56 crc kubenswrapper[4693]: I1212 15:50:56.812534 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 12 15:50:56 crc kubenswrapper[4693]: I1212 15:50:56.919915 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 12 15:50:57 crc kubenswrapper[4693]: I1212 15:50:57.196059 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 12 15:50:57 crc kubenswrapper[4693]: I1212 15:50:57.201626 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Dec 12 15:50:57 crc kubenswrapper[4693]: I1212 15:50:57.275821 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 12 15:50:57 crc kubenswrapper[4693]: I1212 15:50:57.754673 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
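The reflector.go:368 "Caches populated" entries that begin here and dominate the rest of this window are client-go reflectors finishing their initial LIST: the kubelet keeps a watch-backed cache per Secret/ConfigMap (object-"namespace"/"name") that pods on the node reference, and logs once each cache is primed. A minimal sketch of the same list-then-watch pattern using a generic client-go SharedInformerFactory (this is ordinary informer usage, not the kubelet's internal per-object watch code; the kubeconfig path and namespace are placeholders):

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openshift-marketplace"))

	cmInformer := factory.Core().V1().ConfigMaps().Informer()
	cmInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			cm := obj.(*corev1.ConfigMap)
			fmt.Printf("cached ConfigMap %s/%s\n", cm.Namespace, cm.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// Returns once the reflector's initial LIST has been delivered to the
	// cache -- the moment corresponding to a "Caches populated" log line.
	factory.WaitForCacheSync(stop)
	fmt.Println("caches populated")
}
```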
Dec 12 15:50:57 crc kubenswrapper[4693]: I1212 15:50:57.758736 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 12 15:50:57 crc kubenswrapper[4693]: I1212 15:50:57.883558 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 12 15:50:58 crc kubenswrapper[4693]: I1212 15:50:58.043022 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 12 15:50:58 crc kubenswrapper[4693]: I1212 15:50:58.159429 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 12 15:50:58 crc kubenswrapper[4693]: I1212 15:50:58.373156 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 12 15:50:58 crc kubenswrapper[4693]: I1212 15:50:58.574378 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 12 15:50:58 crc kubenswrapper[4693]: I1212 15:50:58.581521 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 12 15:50:58 crc kubenswrapper[4693]: I1212 15:50:58.655447 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Dec 12 15:50:58 crc kubenswrapper[4693]: I1212 15:50:58.735682 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 12 15:50:58 crc kubenswrapper[4693]: I1212 15:50:58.943785 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.104770 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.212774 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.279062 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.416378 4693 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.417718 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fvk2k" podStartSLOduration=32.416036569 podStartE2EDuration="2m6.417692885s" podCreationTimestamp="2025-12-12 15:48:53 +0000 UTC" firstStartedPulling="2025-12-12 15:48:55.591376448 +0000 UTC m=+162.760016049" lastFinishedPulling="2025-12-12 15:50:29.593032764 +0000 UTC m=+256.761672365" observedRunningTime="2025-12-12 15:50:46.147979935 +0000 UTC m=+273.316619556" watchObservedRunningTime="2025-12-12 15:50:59.417692885 +0000 UTC m=+286.586332506"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.419115 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ctq58" podStartSLOduration=29.524768041 podStartE2EDuration="2m5.419105715s" podCreationTimestamp="2025-12-12 15:48:54 +0000 UTC" firstStartedPulling="2025-12-12 15:48:55.594853857 +0000 UTC m=+162.763493458" lastFinishedPulling="2025-12-12 15:50:31.489191531 +0000 UTC m=+258.657831132" observedRunningTime="2025-12-12 15:50:46.093625513 +0000 UTC m=+273.262265114" watchObservedRunningTime="2025-12-12 15:50:59.419105715 +0000 UTC m=+286.587745336"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.419259 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q4lmj" podStartSLOduration=30.912992534 podStartE2EDuration="2m3.419253389s" podCreationTimestamp="2025-12-12 15:48:56 +0000 UTC" firstStartedPulling="2025-12-12 15:48:58.906325405 +0000 UTC m=+166.074965006" lastFinishedPulling="2025-12-12 15:50:31.41258626 +0000 UTC m=+258.581225861" observedRunningTime="2025-12-12 15:50:46.058398587 +0000 UTC m=+273.227038188" watchObservedRunningTime="2025-12-12 15:50:59.419253389 +0000 UTC m=+286.587893000"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.420863 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kb99c" podStartSLOduration=32.597737137 podStartE2EDuration="2m3.420853775s" podCreationTimestamp="2025-12-12 15:48:56 +0000 UTC" firstStartedPulling="2025-12-12 15:48:58.769914896 +0000 UTC m=+165.938554497" lastFinishedPulling="2025-12-12 15:50:29.593031534 +0000 UTC m=+256.761671135" observedRunningTime="2025-12-12 15:50:46.004978282 +0000 UTC m=+273.173617883" watchObservedRunningTime="2025-12-12 15:50:59.420853775 +0000 UTC m=+286.589493386"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.422404 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-klr5t" podStartSLOduration=30.752395294 podStartE2EDuration="2m2.422395589s" podCreationTimestamp="2025-12-12 15:48:57 +0000 UTC" firstStartedPulling="2025-12-12 15:48:59.9759225 +0000 UTC m=+167.144562101" lastFinishedPulling="2025-12-12 15:50:31.645922795 +0000 UTC m=+258.814562396" observedRunningTime="2025-12-12 15:50:46.069667319 +0000 UTC m=+273.238306920" watchObservedRunningTime="2025-12-12 15:50:59.422395589 +0000 UTC m=+286.591035200"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.422513 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zmcqt" podStartSLOduration=31.636209903 podStartE2EDuration="2m2.422507912s" podCreationTimestamp="2025-12-12 15:48:57 +0000 UTC" firstStartedPulling="2025-12-12 15:48:58.84238893 +0000 UTC m=+166.011028531" lastFinishedPulling="2025-12-12 15:50:29.628686939 +0000 UTC m=+256.797326540" observedRunningTime="2025-12-12 15:50:46.08160818 +0000 UTC m=+273.250247781" watchObservedRunningTime="2025-12-12 15:50:59.422507912 +0000 UTC m=+286.591147523"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.424461 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-wm8d5","openshift-marketplace/certified-operators-p4cpj"]
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.424547 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.428488 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.430767 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.443135 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.443116121 podStartE2EDuration="13.443116121s" podCreationTimestamp="2025-12-12 15:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:50:59.442142993 +0000 UTC m=+286.610782594" watchObservedRunningTime="2025-12-12 15:50:59.443116121 +0000 UTC m=+286.611755742"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.578981 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.620674 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.678682 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.763175 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.818097 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 12 15:50:59 crc kubenswrapper[4693]: I1212 15:50:59.882262 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 12 15:51:00 crc kubenswrapper[4693]: I1212 15:51:00.007443 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 12 15:51:00 crc kubenswrapper[4693]: I1212 15:51:00.333010 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 12 15:51:00 crc kubenswrapper[4693]: I1212 15:51:00.390675 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 12 15:51:00 crc kubenswrapper[4693]: I1212 15:51:00.430818 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 12 15:51:00 crc kubenswrapper[4693]: I1212 15:51:00.559443 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Dec 12 15:51:00 crc kubenswrapper[4693]: I1212 15:51:00.676377 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 12 15:51:00 crc kubenswrapper[4693]: I1212 15:51:00.768489 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 12 15:51:00 crc kubenswrapper[4693]: I1212 15:51:00.946868 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Dec 12 15:51:00 crc kubenswrapper[4693]: I1212 15:51:00.970775 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
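The pod_startup_latency_tracker lines above carry enough data to check how the two durations relate: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For community-operators-fvk2k that is 2m6.417692885s of E2E time, of which about 1m34s was pulling, leaving the logged 32.416036569s of SLO time; static pods such as kube-apiserver-crc log zero-valued pull timestamps, so both durations coincide. A small Go check using the values copied from the fvk2k line (the relationship is inferred from the logged numbers, not quoted from kubelet documentation):

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func ts(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values from the community-operators-fvk2k entry above.
	created := ts("2025-12-12 15:48:53 +0000 UTC")
	firstPull := ts("2025-12-12 15:48:55.591376448 +0000 UTC")
	lastPull := ts("2025-12-12 15:50:29.593032764 +0000 UTC")
	watched := ts("2025-12-12 15:50:59.417692885 +0000 UTC")

	e2e := watched.Sub(created)        // matches podStartE2EDuration="2m6.417692885s"
	pulling := lastPull.Sub(firstPull) // time spent pulling images
	slo := e2e - pulling               // matches podStartSLOduration=32.416036569

	fmt.Println("E2E:", e2e, " pulling:", pulling, " SLO:", slo)
}
```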
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.021922 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.052939 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.063366 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.172135 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.302235 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.310320 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.360492 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.368069 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" path="/var/lib/kubelet/pods/38d663d8-7b9e-4685-9b27-cdf525b225af/volumes"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.369246 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" path="/var/lib/kubelet/pods/f743d3ca-28a7-4e25-955f-1385b9ef8c05/volumes"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.378697 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.445657 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-db548d47c-z22tr"]
Dec 12 15:51:01 crc kubenswrapper[4693]: E1212 15:51:01.445997 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" containerName="extract-utilities"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.446022 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" containerName="extract-utilities"
Dec 12 15:51:01 crc kubenswrapper[4693]: E1212 15:51:01.446051 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" containerName="oauth-openshift"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.446065 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" containerName="oauth-openshift"
Dec 12 15:51:01 crc kubenswrapper[4693]: E1212 15:51:01.446090 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" containerName="installer"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.446099 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" containerName="installer"
Dec 12 15:51:01 crc kubenswrapper[4693]: E1212 15:51:01.446113 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" containerName="extract-content"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.446122 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" containerName="extract-content"
Dec 12 15:51:01 crc kubenswrapper[4693]: E1212 15:51:01.446136 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" containerName="registry-server"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.446144 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" containerName="registry-server"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.446304 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb66d17-57a0-4b72-803c-0daec41c6e72" containerName="installer"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.446328 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f743d3ca-28a7-4e25-955f-1385b9ef8c05" containerName="oauth-openshift"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.446343 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d663d8-7b9e-4685-9b27-cdf525b225af" containerName="registry-server"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.446952 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.450889 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.450950 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.451232 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.454540 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.454562 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.454705 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.455178 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.461720 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.461761 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.477679 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.477679 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
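The cpu_manager/state_mem/memory_manager burst above is triggered by the new oauth-openshift pod's admission: before assigning resources, the kubelet sweeps per-container CPU and memory state belonging to pods that no longer exist (the marketplace pod 38d663d8..., the replaced oauth-openshift pod f743d3ca..., and an installer pod ffb66d17...). The sweep reduces to deleting map entries whose pod UID is absent from the active set. An illustrative sketch only; the types, function, and eviction details here are mine, not the kubelet's real implementation:

```go
package main

import "fmt"

// podUID -> container name -> assigned cpuset (illustrative state shape)
type containerAssignments map[string]map[string]string

func removeStaleState(state containerAssignments, active map[string]bool) {
	for podUID, containers := range state {
		if active[podUID] {
			continue // pod still exists; keep its assignments
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(state, podUID) // deleting during range is safe in Go
	}
}

func main() {
	state := containerAssignments{
		"38d663d8": {"extract-utilities": "0-1", "extract-content": "0-1", "registry-server": "0-1"},
		"f743d3ca": {"oauth-openshift": "2-3"},
	}
	removeStaleState(state, map[string]bool{}) // neither pod is active any more
	fmt.Println("remaining pods:", len(state))
}
```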
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.479906 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.484043 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.487859 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.492561 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.500384 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.530793 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.544898 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.597814 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.597865 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-router-certs\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.597887 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-user-template-error\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.597905 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.597924 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/70395fde-23f6-41b0-a04e-c4568b405e9d-audit-policies\") pod 
\"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.597953 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70395fde-23f6-41b0-a04e-c4568b405e9d-audit-dir\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.597970 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.597986 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-session\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.598065 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.598116 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.598155 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64xqc\" (UniqueName: \"kubernetes.io/projected/70395fde-23f6-41b0-a04e-c4568b405e9d-kube-api-access-64xqc\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.598184 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-service-ca\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.598266 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-user-template-login\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.598328 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.673719 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.700664 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.702430 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.702513 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.702551 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64xqc\" (UniqueName: \"kubernetes.io/projected/70395fde-23f6-41b0-a04e-c4568b405e9d-kube-api-access-64xqc\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.702577 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-service-ca\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.702620 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-user-template-login\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.702641 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.702661 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.702683 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-router-certs\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.702702 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-user-template-error\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.702727 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.702752 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/70395fde-23f6-41b0-a04e-c4568b405e9d-audit-policies\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.702775 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70395fde-23f6-41b0-a04e-c4568b405e9d-audit-dir\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.702796 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.702819 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-session\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.704100 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.704345 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70395fde-23f6-41b0-a04e-c4568b405e9d-audit-dir\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.705695 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-service-ca\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.706706 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.707737 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/70395fde-23f6-41b0-a04e-c4568b405e9d-audit-policies\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.714662 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.715128 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.715191 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.715432 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-router-certs\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.716181 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.724513 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-system-session\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.725163 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-user-template-login\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.727808 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/70395fde-23f6-41b0-a04e-c4568b405e9d-v4-0-config-user-template-error\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.734487 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64xqc\" (UniqueName: \"kubernetes.io/projected/70395fde-23f6-41b0-a04e-c4568b405e9d-kube-api-access-64xqc\") pod \"oauth-openshift-db548d47c-z22tr\" (UID: \"70395fde-23f6-41b0-a04e-c4568b405e9d\") " pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.756314 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.783967 4693 util.go:30] "No sandbox for pod can be found. 
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.783967 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.806684 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 12 15:51:01 crc kubenswrapper[4693]: I1212 15:51:01.877914 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.057210 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.112209 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.135768 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.140449 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.188859 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.283220 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.327538 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.349242 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.380163 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.394909 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.415002 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.439551 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.586941 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.685840 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.730619 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.776424 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.828150 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.851500 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.928491 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 12 15:51:02 crc kubenswrapper[4693]: I1212 15:51:02.972642 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 12 15:51:03 crc kubenswrapper[4693]: I1212 15:51:03.170620 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 12 15:51:03 crc kubenswrapper[4693]: I1212 15:51:03.229573 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 12 15:51:03 crc kubenswrapper[4693]: I1212 15:51:03.273543 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 12 15:51:03 crc kubenswrapper[4693]: I1212 15:51:03.276044 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 12 15:51:03 crc kubenswrapper[4693]: I1212 15:51:03.281722 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 12 15:51:03 crc kubenswrapper[4693]: I1212 15:51:03.368162 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 12 15:51:03 crc kubenswrapper[4693]: I1212 15:51:03.383715 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 12 15:51:03 crc kubenswrapper[4693]: I1212 15:51:03.547297 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Dec 12 15:51:03 crc kubenswrapper[4693]: I1212 15:51:03.561734 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 12 15:51:03 crc kubenswrapper[4693]: I1212 15:51:03.669060 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 12 15:51:03 crc kubenswrapper[4693]: I1212 15:51:03.759490 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 12 15:51:03 crc kubenswrapper[4693]: I1212 15:51:03.769534 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 12 15:51:03 crc kubenswrapper[4693]: I1212 15:51:03.802851 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 12 15:51:03 crc kubenswrapper[4693]: I1212 15:51:03.806360 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 12 15:51:04 crc kubenswrapper[4693]: I1212 15:51:04.055876 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Dec 12 15:51:04 crc kubenswrapper[4693]: I1212 15:51:04.184519 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 12 15:51:04 crc kubenswrapper[4693]: I1212 15:51:04.300125 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Dec 12 15:51:04 crc kubenswrapper[4693]: I1212 15:51:04.380707 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Dec 12 15:51:04 crc kubenswrapper[4693]: I1212 15:51:04.470774 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 12 15:51:04 crc kubenswrapper[4693]: I1212 15:51:04.470827 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Dec 12 15:51:04 crc kubenswrapper[4693]: I1212 15:51:04.546138 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 12 15:51:04 crc kubenswrapper[4693]: I1212 15:51:04.587542 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 12 15:51:04 crc kubenswrapper[4693]: I1212 15:51:04.591905 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 12 15:51:04 crc kubenswrapper[4693]: I1212 15:51:04.767659 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 12 15:51:04 crc kubenswrapper[4693]: I1212 15:51:04.777800 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 12 15:51:04 crc kubenswrapper[4693]: I1212 15:51:04.888774 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 12 15:51:04 crc kubenswrapper[4693]: I1212 15:51:04.936004 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Dec 12 15:51:04 crc kubenswrapper[4693]: I1212 15:51:04.938338 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Dec 12 15:51:04 crc kubenswrapper[4693]: I1212 15:51:04.999097 4693 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.008256 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.015103 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.016921 4693 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.020894 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.054902 4693 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.100125 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.181125 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.209056 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.210907 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.405842 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.524928 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.678131 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.695538 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.713826 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.736219 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.806041 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.841484 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.924765 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.947656 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Dec 12 15:51:05 crc kubenswrapper[4693]: I1212 15:51:05.994434 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.012143 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.094065 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.096501 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.175414 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.304995 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.335022 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.417648 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.438854 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.484584 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.496516 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.527517 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.553013 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.639143 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.699184 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.741405 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.877017 4693 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.889886 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Dec 12 15:51:06 crc kubenswrapper[4693]: I1212 15:51:06.909252 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.073610 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.106767 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.224284 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.253251 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-db548d47c-z22tr"]
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.265036 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.281854 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.357390 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.373916 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.418995 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.428635 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.476620 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-db548d47c-z22tr"]
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.599493 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.639880 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.662789 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.712595 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.754095 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.760493 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.838875 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.866108 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.880605 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 12 15:51:07 crc kubenswrapper[4693]: I1212 15:51:07.931245 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.132943 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.215673 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.304187 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.317043 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.342403 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.360164 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.383762 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.400930 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" event={"ID":"70395fde-23f6-41b0-a04e-c4568b405e9d","Type":"ContainerStarted","Data":"1b9d77c1cbf5e73bd221f286b168fe2c4a8d6d3a0409c7e41383af29dd1358b6"}
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.400970 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" event={"ID":"70395fde-23f6-41b0-a04e-c4568b405e9d","Type":"ContainerStarted","Data":"142ae024f86056186e6e1515c06b4365dc88e7030c1785bf2f3e5efedf754c1a"}
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.401524 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.406077 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.420469 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" podStartSLOduration=78.420453275 podStartE2EDuration="1m18.420453275s" podCreationTimestamp="2025-12-12 15:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:51:08.417099729 +0000 UTC m=+295.585739340" watchObservedRunningTime="2025-12-12 15:51:08.420453275 +0000 UTC m=+295.589092876"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.483392 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.491496 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.584658 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.709663 4693 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.710026 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://dfce17445ea735432f77f51f29e615d00e597f00c6e937e475bd626e9f418a32" gracePeriod=5
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.717714 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.748471 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.761340 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.769244 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.876058 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.892127 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 12 15:51:08 crc kubenswrapper[4693]: I1212 15:51:08.956213 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.152680 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.188681 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.200959 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.317864 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.409722 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.449303 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.526852 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.570255 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.575500 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.577439 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.689096 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.706799 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.825022 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.833908 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.854529 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.966063 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.971957 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 12 15:51:09 crc kubenswrapper[4693]: I1212 15:51:09.995623 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 12 15:51:10 crc kubenswrapper[4693]: I1212 15:51:10.033577 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 12 15:51:10 crc kubenswrapper[4693]: I1212 15:51:10.049716 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 12 15:51:10 crc kubenswrapper[4693]: I1212 15:51:10.075103 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 12 15:51:10 crc kubenswrapper[4693]: I1212 15:51:10.128489 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Dec 12 15:51:10 crc kubenswrapper[4693]: I1212 15:51:10.137799 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 12 15:51:10 crc kubenswrapper[4693]: I1212 15:51:10.138403 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 12 15:51:10 crc kubenswrapper[4693]: I1212 15:51:10.252048 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 12 15:51:10 crc kubenswrapper[4693]: I1212 15:51:10.263874 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 12 15:51:10 crc kubenswrapper[4693]: I1212 15:51:10.300763 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 12 15:51:10 crc kubenswrapper[4693]: I1212 15:51:10.468236 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 12 15:51:10 crc kubenswrapper[4693]: I1212 15:51:10.496618 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 12 15:51:10 crc kubenswrapper[4693]: I1212 15:51:10.583611 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 12 15:51:10 crc kubenswrapper[4693]: I1212 15:51:10.708251 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 12 15:51:10 crc kubenswrapper[4693]: I1212 15:51:10.769661 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 12 15:51:10 crc kubenswrapper[4693]: I1212 15:51:10.918825 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 12 15:51:11 crc kubenswrapper[4693]: I1212 15:51:11.044337 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 12 15:51:11 crc kubenswrapper[4693]: I1212 15:51:11.074570 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 12 15:51:11 crc kubenswrapper[4693]: I1212 15:51:11.226321 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Dec 12 15:51:11 crc kubenswrapper[4693]: I1212 15:51:11.229153 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Dec 12 15:51:11 crc kubenswrapper[4693]: I1212 15:51:11.316040 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 12 15:51:11 crc kubenswrapper[4693]: I1212 15:51:11.497186 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 12 15:51:11 crc kubenswrapper[4693]: I1212 15:51:11.560583 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 12 15:51:11 crc kubenswrapper[4693]: I1212 15:51:11.564386 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 12 15:51:11 crc kubenswrapper[4693]: I1212 15:51:11.648816 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 12 15:51:11 crc kubenswrapper[4693]: I1212 15:51:11.722007 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 12 15:51:11 crc kubenswrapper[4693]: I1212 15:51:11.734042 4693 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 12 15:51:11 crc kubenswrapper[4693]: I1212 15:51:11.792043 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Dec 12 15:51:11 crc kubenswrapper[4693]: I1212 15:51:11.868907 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Dec 12 15:51:11 crc kubenswrapper[4693]: I1212 15:51:11.988475 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 12 15:51:12 crc kubenswrapper[4693]: I1212 15:51:12.114399 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 12 15:51:12 crc kubenswrapper[4693]: I1212 15:51:12.235070 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 12 15:51:12 crc kubenswrapper[4693]: I1212 15:51:12.286676 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 12 15:51:12 crc kubenswrapper[4693]: I1212 15:51:12.343594 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Dec 12 15:51:12 crc kubenswrapper[4693]: I1212 15:51:12.433917 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 12 15:51:12 crc kubenswrapper[4693]: I1212 15:51:12.868542 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 12 15:51:12 crc kubenswrapper[4693]: I1212 15:51:12.991533 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 12 15:51:13 crc kubenswrapper[4693]: I1212 15:51:13.133192 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Dec 12 15:51:13 crc kubenswrapper[4693]: I1212 15:51:13.325931 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 12 15:51:13 crc kubenswrapper[4693]: I1212 15:51:13.377248 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.296994 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.297435 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.397489 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.397561 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.397637 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.397668 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.397709 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.397751 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.397792 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.397804 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.397914 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.398111 4693 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.398127 4693 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.398142 4693 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.398154 4693 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.406655 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.440264 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.440361 4693 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="dfce17445ea735432f77f51f29e615d00e597f00c6e937e475bd626e9f418a32" exitCode=137
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.440421 4693 scope.go:117] "RemoveContainer" containerID="dfce17445ea735432f77f51f29e615d00e597f00c6e937e475bd626e9f418a32"
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.440441 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.462600 4693 scope.go:117] "RemoveContainer" containerID="dfce17445ea735432f77f51f29e615d00e597f00c6e937e475bd626e9f418a32"
Dec 12 15:51:14 crc kubenswrapper[4693]: E1212 15:51:14.463147 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfce17445ea735432f77f51f29e615d00e597f00c6e937e475bd626e9f418a32\": container with ID starting with dfce17445ea735432f77f51f29e615d00e597f00c6e937e475bd626e9f418a32 not found: ID does not exist" containerID="dfce17445ea735432f77f51f29e615d00e597f00c6e937e475bd626e9f418a32"
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.463216 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfce17445ea735432f77f51f29e615d00e597f00c6e937e475bd626e9f418a32"} err="failed to get container status \"dfce17445ea735432f77f51f29e615d00e597f00c6e937e475bd626e9f418a32\": rpc error: code = NotFound desc = could not find container \"dfce17445ea735432f77f51f29e615d00e597f00c6e937e475bd626e9f418a32\": container with ID starting with dfce17445ea735432f77f51f29e615d00e597f00c6e937e475bd626e9f418a32 not found: ID does not exist"
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.500183 4693 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.654752 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 12 15:51:14 crc kubenswrapper[4693]: I1212 15:51:14.966023 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 12 15:51:15 crc kubenswrapper[4693]: I1212 15:51:15.369630 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.477946 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-88srp"]
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.478738 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" podUID="f8c75cc9-2bff-43c4-b8c8-838b67ea4874" containerName="controller-manager" containerID="cri-o://439674ffc15f1bd26f24b53911af69afdd189f71348eaa1d5192ad169b85fb5a" gracePeriod=30
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.577365 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6"]
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.577602 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" podUID="b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112" containerName="route-controller-manager" containerID="cri-o://a3bcd1d51e62856ef6ca0239ee5c9ddf9d1488a0b7f21ca8e1262244a8738102" gracePeriod=30
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.801519 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-88srp"
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.813721 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-config\") pod \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") "
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.813770 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-client-ca\") pod \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") "
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.814760 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-client-ca" (OuterVolumeSpecName: "client-ca") pod "f8c75cc9-2bff-43c4-b8c8-838b67ea4874" (UID: "f8c75cc9-2bff-43c4-b8c8-838b67ea4874"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.814791 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-config" (OuterVolumeSpecName: "config") pod "f8c75cc9-2bff-43c4-b8c8-838b67ea4874" (UID: "f8c75cc9-2bff-43c4-b8c8-838b67ea4874"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.883671 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6"
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.914530 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-serving-cert\") pod \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") "
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.914592 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-proxy-ca-bundles\") pod \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") "
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.914619 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hts9j\" (UniqueName: \"kubernetes.io/projected/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-kube-api-access-hts9j\") pod \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\" (UID: \"f8c75cc9-2bff-43c4-b8c8-838b67ea4874\") "
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.914765 4693 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.914780 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-config\") on node \"crc\" DevicePath \"\""
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.916245 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f8c75cc9-2bff-43c4-b8c8-838b67ea4874" (UID: "f8c75cc9-2bff-43c4-b8c8-838b67ea4874"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.920653 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-kube-api-access-hts9j" (OuterVolumeSpecName: "kube-api-access-hts9j") pod "f8c75cc9-2bff-43c4-b8c8-838b67ea4874" (UID: "f8c75cc9-2bff-43c4-b8c8-838b67ea4874"). InnerVolumeSpecName "kube-api-access-hts9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 15:51:30 crc kubenswrapper[4693]: I1212 15:51:30.920754 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f8c75cc9-2bff-43c4-b8c8-838b67ea4874" (UID: "f8c75cc9-2bff-43c4-b8c8-838b67ea4874"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.015279 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt4x5\" (UniqueName: \"kubernetes.io/projected/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-kube-api-access-qt4x5\") pod \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\" (UID: \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\") "
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.015397 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-config\") pod \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\" (UID: \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\") "
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.015472 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-client-ca\") pod \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\" (UID: \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\") "
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.015536 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-serving-cert\") pod \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\" (UID: \"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112\") "
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.015795 4693 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.015814 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hts9j\" (UniqueName: \"kubernetes.io/projected/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-kube-api-access-hts9j\") on node \"crc\" DevicePath \"\""
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.015827 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8c75cc9-2bff-43c4-b8c8-838b67ea4874-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.016320 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-client-ca" (OuterVolumeSpecName: "client-ca") pod "b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112" (UID: "b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.016527 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-config" (OuterVolumeSpecName: "config") pod "b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112" (UID: "b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.018895 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-kube-api-access-qt4x5" (OuterVolumeSpecName: "kube-api-access-qt4x5") pod "b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112" (UID: "b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112"). InnerVolumeSpecName "kube-api-access-qt4x5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.019692 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112" (UID: "b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.117554 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.117609 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt4x5\" (UniqueName: \"kubernetes.io/projected/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-kube-api-access-qt4x5\") on node \"crc\" DevicePath \"\""
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.117631 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-config\") on node \"crc\" DevicePath \"\""
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.117648 4693 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.535072 4693 generic.go:334] "Generic (PLEG): container finished" podID="b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112" containerID="a3bcd1d51e62856ef6ca0239ee5c9ddf9d1488a0b7f21ca8e1262244a8738102" exitCode=0
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.535123 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6"
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.535135 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" event={"ID":"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112","Type":"ContainerDied","Data":"a3bcd1d51e62856ef6ca0239ee5c9ddf9d1488a0b7f21ca8e1262244a8738102"}
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.536767 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6" event={"ID":"b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112","Type":"ContainerDied","Data":"4263cd488490f654e64a8b14ee486c715852b9bc053c67477c4c0c779eaaa1e4"}
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.536804 4693 scope.go:117] "RemoveContainer" containerID="a3bcd1d51e62856ef6ca0239ee5c9ddf9d1488a0b7f21ca8e1262244a8738102"
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.538502 4693 generic.go:334] "Generic (PLEG): container finished" podID="f8c75cc9-2bff-43c4-b8c8-838b67ea4874" containerID="439674ffc15f1bd26f24b53911af69afdd189f71348eaa1d5192ad169b85fb5a" exitCode=0
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.538547 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-88srp"
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.538561 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" event={"ID":"f8c75cc9-2bff-43c4-b8c8-838b67ea4874","Type":"ContainerDied","Data":"439674ffc15f1bd26f24b53911af69afdd189f71348eaa1d5192ad169b85fb5a"}
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.538594 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-88srp" event={"ID":"f8c75cc9-2bff-43c4-b8c8-838b67ea4874","Type":"ContainerDied","Data":"15f428c3542ed1bdcfed059d4f67cb580c94342d1506b301cc113e6190601c90"}
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.564861 4693 scope.go:117] "RemoveContainer" containerID="a3bcd1d51e62856ef6ca0239ee5c9ddf9d1488a0b7f21ca8e1262244a8738102"
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.564916 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-88srp"]
Dec 12 15:51:31 crc kubenswrapper[4693]: E1212 15:51:31.565852 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3bcd1d51e62856ef6ca0239ee5c9ddf9d1488a0b7f21ca8e1262244a8738102\": container with ID starting with a3bcd1d51e62856ef6ca0239ee5c9ddf9d1488a0b7f21ca8e1262244a8738102 not found: ID does not exist" containerID="a3bcd1d51e62856ef6ca0239ee5c9ddf9d1488a0b7f21ca8e1262244a8738102"
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.565899 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3bcd1d51e62856ef6ca0239ee5c9ddf9d1488a0b7f21ca8e1262244a8738102"} err="failed to get container status \"a3bcd1d51e62856ef6ca0239ee5c9ddf9d1488a0b7f21ca8e1262244a8738102\": rpc error: code = NotFound desc = could not find container \"a3bcd1d51e62856ef6ca0239ee5c9ddf9d1488a0b7f21ca8e1262244a8738102\": container with ID starting with a3bcd1d51e62856ef6ca0239ee5c9ddf9d1488a0b7f21ca8e1262244a8738102 not found: ID does not exist"
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.565930 4693 scope.go:117] "RemoveContainer" containerID="439674ffc15f1bd26f24b53911af69afdd189f71348eaa1d5192ad169b85fb5a"
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.569857 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-88srp"]
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.576135 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6"]
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.579607 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzsn6"]
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.585545 4693 scope.go:117] "RemoveContainer" containerID="439674ffc15f1bd26f24b53911af69afdd189f71348eaa1d5192ad169b85fb5a"
Dec 12 15:51:31 crc kubenswrapper[4693]: E1212 15:51:31.586074 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"439674ffc15f1bd26f24b53911af69afdd189f71348eaa1d5192ad169b85fb5a\": container with ID starting with 439674ffc15f1bd26f24b53911af69afdd189f71348eaa1d5192ad169b85fb5a not found: ID does not exist" containerID="439674ffc15f1bd26f24b53911af69afdd189f71348eaa1d5192ad169b85fb5a"
Dec 12 15:51:31 crc kubenswrapper[4693]: I1212 15:51:31.586104 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"439674ffc15f1bd26f24b53911af69afdd189f71348eaa1d5192ad169b85fb5a"} err="failed to get container status \"439674ffc15f1bd26f24b53911af69afdd189f71348eaa1d5192ad169b85fb5a\": rpc error: code = NotFound desc = could not find container \"439674ffc15f1bd26f24b53911af69afdd189f71348eaa1d5192ad169b85fb5a\": container with ID starting with 439674ffc15f1bd26f24b53911af69afdd189f71348eaa1d5192ad169b85fb5a not found: ID does not exist"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.321459 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh"]
Dec 12 15:51:32 crc kubenswrapper[4693]: E1212 15:51:32.321722 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c75cc9-2bff-43c4-b8c8-838b67ea4874" containerName="controller-manager"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.321734 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c75cc9-2bff-43c4-b8c8-838b67ea4874" containerName="controller-manager"
Dec 12 15:51:32 crc kubenswrapper[4693]: E1212 15:51:32.321752 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.321758 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 12 15:51:32 crc kubenswrapper[4693]: E1212 15:51:32.321769 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112" containerName="route-controller-manager"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.321775 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112" containerName="route-controller-manager"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.321865 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112" containerName="route-controller-manager"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.321880 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.321891 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8c75cc9-2bff-43c4-b8c8-838b67ea4874" containerName="controller-manager"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.322280 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh"
Dec 12 15:51:32 crc kubenswrapper[4693]: W1212 15:51:32.336996 4693 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Dec 12 15:51:32 crc kubenswrapper[4693]: E1212 15:51:32.337057 4693 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 12 15:51:32 crc kubenswrapper[4693]: W1212 15:51:32.337119 4693 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Dec 12 15:51:32 crc kubenswrapper[4693]: E1212 15:51:32.337132 4693 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 12 15:51:32 crc kubenswrapper[4693]: W1212 15:51:32.337179 4693 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Dec 12 15:51:32 crc kubenswrapper[4693]: E1212 15:51:32.337196 4693 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 12 15:51:32 crc kubenswrapper[4693]: W1212 15:51:32.337239 4693 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Dec 12 15:51:32 crc kubenswrapper[4693]: E1212 15:51:32.337250 4693 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 12 15:51:32 crc kubenswrapper[4693]: W1212 15:51:32.337318 4693 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Dec 12 15:51:32 crc kubenswrapper[4693]: E1212 15:51:32.337336 4693 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 12 15:51:32 crc kubenswrapper[4693]: W1212 15:51:32.337448 4693 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Dec 12 15:51:32 crc kubenswrapper[4693]: E1212 15:51:32.337463 4693 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 12 15:51:32 crc kubenswrapper[4693]: W1212 15:51:32.340923 4693 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Dec 12 15:51:32 crc kubenswrapper[4693]: E1212 15:51:32.340955 4693 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.346364 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"]
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.348322 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.356250 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.356338 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.356256 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.356567 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.356744 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.356930 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.369550 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh"]
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.369599 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"]
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.435664 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-proxy-ca-bundles\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.435817 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-config\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.436380 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-client-ca\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.436431 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr7ww\" (UniqueName: \"kubernetes.io/projected/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-kube-api-access-hr7ww\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.436566 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-serving-cert\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.538229 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkbb6\" (UniqueName: \"kubernetes.io/projected/8c9028d3-03ad-4407-8c6d-2114ebb72b40-kube-api-access-mkbb6\") pod \"route-controller-manager-6587dd5957-dlsk5\" (UID: \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\") " pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.538331 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-client-ca\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.538362 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr7ww\" (UniqueName: \"kubernetes.io/projected/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-kube-api-access-hr7ww\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.538388 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c9028d3-03ad-4407-8c6d-2114ebb72b40-serving-cert\") pod \"route-controller-manager-6587dd5957-dlsk5\" (UID: \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\") " pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.538425 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-serving-cert\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.538462 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-proxy-ca-bundles\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.538492 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c9028d3-03ad-4407-8c6d-2114ebb72b40-client-ca\") pod \"route-controller-manager-6587dd5957-dlsk5\" (UID: \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\") " pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.538533 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-config\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.538555 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c9028d3-03ad-4407-8c6d-2114ebb72b40-config\") pod \"route-controller-manager-6587dd5957-dlsk5\" (UID: \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\") " pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.639000 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c9028d3-03ad-4407-8c6d-2114ebb72b40-client-ca\") pod \"route-controller-manager-6587dd5957-dlsk5\" (UID: \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\") " pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.639050 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c9028d3-03ad-4407-8c6d-2114ebb72b40-config\") pod \"route-controller-manager-6587dd5957-dlsk5\" (UID: \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\") " pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.639076 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkbb6\" (UniqueName: \"kubernetes.io/projected/8c9028d3-03ad-4407-8c6d-2114ebb72b40-kube-api-access-mkbb6\") pod \"route-controller-manager-6587dd5957-dlsk5\" (UID: \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\") " pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.639119 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c9028d3-03ad-4407-8c6d-2114ebb72b40-serving-cert\") pod \"route-controller-manager-6587dd5957-dlsk5\" (UID: \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\") " pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.639890 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c9028d3-03ad-4407-8c6d-2114ebb72b40-client-ca\") pod \"route-controller-manager-6587dd5957-dlsk5\" (UID: \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\") " pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.641655 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c9028d3-03ad-4407-8c6d-2114ebb72b40-config\") pod \"route-controller-manager-6587dd5957-dlsk5\" (UID: \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\") " pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"
Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.663651 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkbb6\" (UniqueName: \"kubernetes.io/projected/8c9028d3-03ad-4407-8c6d-2114ebb72b40-kube-api-access-mkbb6\") pod 
\"route-controller-manager-6587dd5957-dlsk5\" (UID: \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\") " pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5" Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.666295 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c9028d3-03ad-4407-8c6d-2114ebb72b40-serving-cert\") pod \"route-controller-manager-6587dd5957-dlsk5\" (UID: \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\") " pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5" Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.680784 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5" Dec 12 15:51:32 crc kubenswrapper[4693]: I1212 15:51:32.871551 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"] Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.161314 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh"] Dec 12 15:51:33 crc kubenswrapper[4693]: E1212 15:51:33.161864 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-hr7ww proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh" podUID="4c4e7329-4c9e-4745-9fc1-c7a87c29d63a" Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.170862 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"] Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.297101 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.363503 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112" path="/var/lib/kubelet/pods/b7ad7cf3-6bf0-49aa-b6a3-308cbbb8e112/volumes" Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.364363 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8c75cc9-2bff-43c4-b8c8-838b67ea4874" path="/var/lib/kubelet/pods/f8c75cc9-2bff-43c4-b8c8-838b67ea4874/volumes" Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.438967 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 12 15:51:33 crc kubenswrapper[4693]: E1212 15:51:33.538972 4693 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Dec 12 15:51:33 crc kubenswrapper[4693]: E1212 15:51:33.538996 4693 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Dec 12 15:51:33 crc kubenswrapper[4693]: E1212 15:51:33.539003 4693 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 12 15:51:33 crc kubenswrapper[4693]: E1212 15:51:33.539063 4693 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed 
out waiting for the condition Dec 12 15:51:33 crc kubenswrapper[4693]: E1212 15:51:33.539065 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-client-ca podName:4c4e7329-4c9e-4745-9fc1-c7a87c29d63a nodeName:}" failed. No retries permitted until 2025-12-12 15:51:34.039044995 +0000 UTC m=+321.207684586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-client-ca") pod "controller-manager-7bcf8f6dc9-mlpqh" (UID: "4c4e7329-4c9e-4745-9fc1-c7a87c29d63a") : failed to sync configmap cache: timed out waiting for the condition Dec 12 15:51:33 crc kubenswrapper[4693]: E1212 15:51:33.539166 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-config podName:4c4e7329-4c9e-4745-9fc1-c7a87c29d63a nodeName:}" failed. No retries permitted until 2025-12-12 15:51:34.039145558 +0000 UTC m=+321.207785159 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-config") pod "controller-manager-7bcf8f6dc9-mlpqh" (UID: "4c4e7329-4c9e-4745-9fc1-c7a87c29d63a") : failed to sync configmap cache: timed out waiting for the condition Dec 12 15:51:33 crc kubenswrapper[4693]: E1212 15:51:33.539186 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-serving-cert podName:4c4e7329-4c9e-4745-9fc1-c7a87c29d63a nodeName:}" failed. No retries permitted until 2025-12-12 15:51:34.039179399 +0000 UTC m=+321.207819000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-serving-cert") pod "controller-manager-7bcf8f6dc9-mlpqh" (UID: "4c4e7329-4c9e-4745-9fc1-c7a87c29d63a") : failed to sync secret cache: timed out waiting for the condition Dec 12 15:51:33 crc kubenswrapper[4693]: E1212 15:51:33.539204 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-proxy-ca-bundles podName:4c4e7329-4c9e-4745-9fc1-c7a87c29d63a nodeName:}" failed. No retries permitted until 2025-12-12 15:51:34.039198539 +0000 UTC m=+321.207838250 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-proxy-ca-bundles") pod "controller-manager-7bcf8f6dc9-mlpqh" (UID: "4c4e7329-4c9e-4745-9fc1-c7a87c29d63a") : failed to sync configmap cache: timed out waiting for the condition Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.551726 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5" event={"ID":"8c9028d3-03ad-4407-8c6d-2114ebb72b40","Type":"ContainerStarted","Data":"c65e88e007486e017a1a4a69713a547e32baf2d767ad29d5947a08acd3c3373d"} Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.552068 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5" Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.552085 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5" event={"ID":"8c9028d3-03ad-4407-8c6d-2114ebb72b40","Type":"ContainerStarted","Data":"ae61e82630224e53275fefbb2e9132c8a43dbe2e4b86163f88a23f7b50093969"} Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.551744 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh" Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.560264 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh" Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.571807 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5" podStartSLOduration=3.5717887 podStartE2EDuration="3.5717887s" podCreationTimestamp="2025-12-12 15:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:51:33.568097875 +0000 UTC m=+320.736737476" watchObservedRunningTime="2025-12-12 15:51:33.5717887 +0000 UTC m=+320.740428301" Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.626620 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.636619 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5" Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.667347 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.729442 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.731247 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.755301 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr7ww\" (UniqueName: \"kubernetes.io/projected/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-kube-api-access-hr7ww\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: 
\"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh" Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.776331 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.866899 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr7ww\" (UniqueName: \"kubernetes.io/projected/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-kube-api-access-hr7ww\") pod \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.877521 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-kube-api-access-hr7ww" (OuterVolumeSpecName: "kube-api-access-hr7ww") pod "4c4e7329-4c9e-4745-9fc1-c7a87c29d63a" (UID: "4c4e7329-4c9e-4745-9fc1-c7a87c29d63a"). InnerVolumeSpecName "kube-api-access-hr7ww". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:51:33 crc kubenswrapper[4693]: I1212 15:51:33.969034 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr7ww\" (UniqueName: \"kubernetes.io/projected/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-kube-api-access-hr7ww\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.070364 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-client-ca\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.070437 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-serving-cert\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.070468 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-proxy-ca-bundles\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.070499 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-config\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.071649 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-proxy-ca-bundles\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.071689 4693 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-client-ca\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.071808 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-config\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.078407 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-serving-cert\") pod \"controller-manager-7bcf8f6dc9-mlpqh\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.171559 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-config\") pod \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.171679 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-proxy-ca-bundles\") pod \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.171764 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-client-ca\") pod \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.171802 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-serving-cert\") pod \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\" (UID: \"4c4e7329-4c9e-4745-9fc1-c7a87c29d63a\") " Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.172431 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-config" (OuterVolumeSpecName: "config") pod "4c4e7329-4c9e-4745-9fc1-c7a87c29d63a" (UID: "4c4e7329-4c9e-4745-9fc1-c7a87c29d63a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.172600 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-client-ca" (OuterVolumeSpecName: "client-ca") pod "4c4e7329-4c9e-4745-9fc1-c7a87c29d63a" (UID: "4c4e7329-4c9e-4745-9fc1-c7a87c29d63a"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.172629 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4c4e7329-4c9e-4745-9fc1-c7a87c29d63a" (UID: "4c4e7329-4c9e-4745-9fc1-c7a87c29d63a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.175340 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4c4e7329-4c9e-4745-9fc1-c7a87c29d63a" (UID: "4c4e7329-4c9e-4745-9fc1-c7a87c29d63a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.273372 4693 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.273408 4693 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.273421 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.273432 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.556971 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.557193 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5" podUID="8c9028d3-03ad-4407-8c6d-2114ebb72b40" containerName="route-controller-manager" containerID="cri-o://c65e88e007486e017a1a4a69713a547e32baf2d767ad29d5947a08acd3c3373d" gracePeriod=30 Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.631627 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh"] Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.637530 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6dd96d466b-n7trc"] Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.638457 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.640981 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.641124 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.641351 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.641460 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.641792 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.641822 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.645206 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7bcf8f6dc9-mlpqh"] Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.646945 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.650239 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dd96d466b-n7trc"] Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.678188 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9dgx\" (UniqueName: \"kubernetes.io/projected/d73652d2-0097-41aa-867f-332bba0d8f78-kube-api-access-j9dgx\") pod \"controller-manager-6dd96d466b-n7trc\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.678240 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73652d2-0097-41aa-867f-332bba0d8f78-serving-cert\") pod \"controller-manager-6dd96d466b-n7trc\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.678300 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-proxy-ca-bundles\") pod \"controller-manager-6dd96d466b-n7trc\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.678353 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-client-ca\") pod \"controller-manager-6dd96d466b-n7trc\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:34 
crc kubenswrapper[4693]: I1212 15:51:34.678376 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-config\") pod \"controller-manager-6dd96d466b-n7trc\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.779011 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-client-ca\") pod \"controller-manager-6dd96d466b-n7trc\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.779059 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-config\") pod \"controller-manager-6dd96d466b-n7trc\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.779124 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9dgx\" (UniqueName: \"kubernetes.io/projected/d73652d2-0097-41aa-867f-332bba0d8f78-kube-api-access-j9dgx\") pod \"controller-manager-6dd96d466b-n7trc\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.779156 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73652d2-0097-41aa-867f-332bba0d8f78-serving-cert\") pod \"controller-manager-6dd96d466b-n7trc\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.779190 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-proxy-ca-bundles\") pod \"controller-manager-6dd96d466b-n7trc\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.780402 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-config\") pod \"controller-manager-6dd96d466b-n7trc\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.780947 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-client-ca\") pod \"controller-manager-6dd96d466b-n7trc\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.781575 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-proxy-ca-bundles\") pod \"controller-manager-6dd96d466b-n7trc\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.787201 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73652d2-0097-41aa-867f-332bba0d8f78-serving-cert\") pod \"controller-manager-6dd96d466b-n7trc\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.797230 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9dgx\" (UniqueName: \"kubernetes.io/projected/d73652d2-0097-41aa-867f-332bba0d8f78-kube-api-access-j9dgx\") pod \"controller-manager-6dd96d466b-n7trc\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.868236 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.982191 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c9028d3-03ad-4407-8c6d-2114ebb72b40-serving-cert\") pod \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\" (UID: \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\") " Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.982273 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c9028d3-03ad-4407-8c6d-2114ebb72b40-client-ca\") pod \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\" (UID: \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\") " Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.982309 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkbb6\" (UniqueName: \"kubernetes.io/projected/8c9028d3-03ad-4407-8c6d-2114ebb72b40-kube-api-access-mkbb6\") pod \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\" (UID: \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\") " Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.982353 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c9028d3-03ad-4407-8c6d-2114ebb72b40-config\") pod \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\" (UID: \"8c9028d3-03ad-4407-8c6d-2114ebb72b40\") " Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.982754 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c9028d3-03ad-4407-8c6d-2114ebb72b40-client-ca" (OuterVolumeSpecName: "client-ca") pod "8c9028d3-03ad-4407-8c6d-2114ebb72b40" (UID: "8c9028d3-03ad-4407-8c6d-2114ebb72b40"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.982861 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c9028d3-03ad-4407-8c6d-2114ebb72b40-config" (OuterVolumeSpecName: "config") pod "8c9028d3-03ad-4407-8c6d-2114ebb72b40" (UID: "8c9028d3-03ad-4407-8c6d-2114ebb72b40"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.985341 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c9028d3-03ad-4407-8c6d-2114ebb72b40-kube-api-access-mkbb6" (OuterVolumeSpecName: "kube-api-access-mkbb6") pod "8c9028d3-03ad-4407-8c6d-2114ebb72b40" (UID: "8c9028d3-03ad-4407-8c6d-2114ebb72b40"). InnerVolumeSpecName "kube-api-access-mkbb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:51:34 crc kubenswrapper[4693]: I1212 15:51:34.985539 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c9028d3-03ad-4407-8c6d-2114ebb72b40-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8c9028d3-03ad-4407-8c6d-2114ebb72b40" (UID: "8c9028d3-03ad-4407-8c6d-2114ebb72b40"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.006249 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.085592 4693 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c9028d3-03ad-4407-8c6d-2114ebb72b40-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.085632 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkbb6\" (UniqueName: \"kubernetes.io/projected/8c9028d3-03ad-4407-8c6d-2114ebb72b40-kube-api-access-mkbb6\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.085652 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c9028d3-03ad-4407-8c6d-2114ebb72b40-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.085669 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c9028d3-03ad-4407-8c6d-2114ebb72b40-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.366369 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c4e7329-4c9e-4745-9fc1-c7a87c29d63a" path="/var/lib/kubelet/pods/4c4e7329-4c9e-4745-9fc1-c7a87c29d63a/volumes" Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.420656 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dd96d466b-n7trc"] Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.565828 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" event={"ID":"d73652d2-0097-41aa-867f-332bba0d8f78","Type":"ContainerStarted","Data":"4c7257e8f915f7bbef581c8a2c68d3cdf7df6fdc6427b7adfb90c5ab41172a65"} Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.565933 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.565952 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" event={"ID":"d73652d2-0097-41aa-867f-332bba0d8f78","Type":"ContainerStarted","Data":"71c9482eb1badec39e6184b21fde1ffc39f667dcca8bc7d13b75536da0dfb8f9"} Dec 12 15:51:35 
crc kubenswrapper[4693]: I1212 15:51:35.567338 4693 patch_prober.go:28] interesting pod/controller-manager-6dd96d466b-n7trc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.567402 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" podUID="d73652d2-0097-41aa-867f-332bba0d8f78" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.568653 4693 generic.go:334] "Generic (PLEG): container finished" podID="8c9028d3-03ad-4407-8c6d-2114ebb72b40" containerID="c65e88e007486e017a1a4a69713a547e32baf2d767ad29d5947a08acd3c3373d" exitCode=0 Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.568707 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5" event={"ID":"8c9028d3-03ad-4407-8c6d-2114ebb72b40","Type":"ContainerDied","Data":"c65e88e007486e017a1a4a69713a547e32baf2d767ad29d5947a08acd3c3373d"} Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.568740 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5" event={"ID":"8c9028d3-03ad-4407-8c6d-2114ebb72b40","Type":"ContainerDied","Data":"ae61e82630224e53275fefbb2e9132c8a43dbe2e4b86163f88a23f7b50093969"} Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.568711 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5" Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.568761 4693 scope.go:117] "RemoveContainer" containerID="c65e88e007486e017a1a4a69713a547e32baf2d767ad29d5947a08acd3c3373d" Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.589742 4693 scope.go:117] "RemoveContainer" containerID="c65e88e007486e017a1a4a69713a547e32baf2d767ad29d5947a08acd3c3373d" Dec 12 15:51:35 crc kubenswrapper[4693]: E1212 15:51:35.590485 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c65e88e007486e017a1a4a69713a547e32baf2d767ad29d5947a08acd3c3373d\": container with ID starting with c65e88e007486e017a1a4a69713a547e32baf2d767ad29d5947a08acd3c3373d not found: ID does not exist" containerID="c65e88e007486e017a1a4a69713a547e32baf2d767ad29d5947a08acd3c3373d" Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.590534 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c65e88e007486e017a1a4a69713a547e32baf2d767ad29d5947a08acd3c3373d"} err="failed to get container status \"c65e88e007486e017a1a4a69713a547e32baf2d767ad29d5947a08acd3c3373d\": rpc error: code = NotFound desc = could not find container \"c65e88e007486e017a1a4a69713a547e32baf2d767ad29d5947a08acd3c3373d\": container with ID starting with c65e88e007486e017a1a4a69713a547e32baf2d767ad29d5947a08acd3c3373d not found: ID does not exist" Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.602478 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" podStartSLOduration=2.602455045 podStartE2EDuration="2.602455045s" podCreationTimestamp="2025-12-12 15:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:51:35.587773666 +0000 UTC m=+322.756413297" watchObservedRunningTime="2025-12-12 15:51:35.602455045 +0000 UTC m=+322.771094646" Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.603422 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"] Dec 12 15:51:35 crc kubenswrapper[4693]: I1212 15:51:35.606877 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6587dd5957-dlsk5"] Dec 12 15:51:36 crc kubenswrapper[4693]: I1212 15:51:36.581435 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.326971 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z"] Dec 12 15:51:37 crc kubenswrapper[4693]: E1212 15:51:37.327200 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c9028d3-03ad-4407-8c6d-2114ebb72b40" containerName="route-controller-manager" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.327211 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c9028d3-03ad-4407-8c6d-2114ebb72b40" containerName="route-controller-manager" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.327319 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c9028d3-03ad-4407-8c6d-2114ebb72b40" 
containerName="route-controller-manager" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.327666 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.330174 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.330264 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.330310 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.330606 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.330724 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.331906 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.339941 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z"] Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.367000 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c9028d3-03ad-4407-8c6d-2114ebb72b40" path="/var/lib/kubelet/pods/8c9028d3-03ad-4407-8c6d-2114ebb72b40/volumes" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.422485 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7490206-a510-4180-baeb-1064f8536458-client-ca\") pod \"route-controller-manager-6bb87b5856-vz82z\" (UID: \"d7490206-a510-4180-baeb-1064f8536458\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.422572 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7490206-a510-4180-baeb-1064f8536458-serving-cert\") pod \"route-controller-manager-6bb87b5856-vz82z\" (UID: \"d7490206-a510-4180-baeb-1064f8536458\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.422650 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz9tn\" (UniqueName: \"kubernetes.io/projected/d7490206-a510-4180-baeb-1064f8536458-kube-api-access-rz9tn\") pod \"route-controller-manager-6bb87b5856-vz82z\" (UID: \"d7490206-a510-4180-baeb-1064f8536458\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.422776 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7490206-a510-4180-baeb-1064f8536458-config\") pod \"route-controller-manager-6bb87b5856-vz82z\" (UID: 
\"d7490206-a510-4180-baeb-1064f8536458\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.523595 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7490206-a510-4180-baeb-1064f8536458-serving-cert\") pod \"route-controller-manager-6bb87b5856-vz82z\" (UID: \"d7490206-a510-4180-baeb-1064f8536458\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.523665 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz9tn\" (UniqueName: \"kubernetes.io/projected/d7490206-a510-4180-baeb-1064f8536458-kube-api-access-rz9tn\") pod \"route-controller-manager-6bb87b5856-vz82z\" (UID: \"d7490206-a510-4180-baeb-1064f8536458\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.523723 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7490206-a510-4180-baeb-1064f8536458-config\") pod \"route-controller-manager-6bb87b5856-vz82z\" (UID: \"d7490206-a510-4180-baeb-1064f8536458\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.523783 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7490206-a510-4180-baeb-1064f8536458-client-ca\") pod \"route-controller-manager-6bb87b5856-vz82z\" (UID: \"d7490206-a510-4180-baeb-1064f8536458\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.524598 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7490206-a510-4180-baeb-1064f8536458-client-ca\") pod \"route-controller-manager-6bb87b5856-vz82z\" (UID: \"d7490206-a510-4180-baeb-1064f8536458\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.524904 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7490206-a510-4180-baeb-1064f8536458-config\") pod \"route-controller-manager-6bb87b5856-vz82z\" (UID: \"d7490206-a510-4180-baeb-1064f8536458\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.537416 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7490206-a510-4180-baeb-1064f8536458-serving-cert\") pod \"route-controller-manager-6bb87b5856-vz82z\" (UID: \"d7490206-a510-4180-baeb-1064f8536458\") " pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.541820 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz9tn\" (UniqueName: \"kubernetes.io/projected/d7490206-a510-4180-baeb-1064f8536458-kube-api-access-rz9tn\") pod \"route-controller-manager-6bb87b5856-vz82z\" (UID: \"d7490206-a510-4180-baeb-1064f8536458\") " 
pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.647661 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:51:37 crc kubenswrapper[4693]: I1212 15:51:37.881689 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z"] Dec 12 15:51:38 crc kubenswrapper[4693]: I1212 15:51:38.589306 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" event={"ID":"d7490206-a510-4180-baeb-1064f8536458","Type":"ContainerStarted","Data":"e19a92003e74e97a2390b61995fb826d311f4b30490f75c208daf86ed09cbb9c"} Dec 12 15:51:38 crc kubenswrapper[4693]: I1212 15:51:38.589922 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" event={"ID":"d7490206-a510-4180-baeb-1064f8536458","Type":"ContainerStarted","Data":"c17a2b8e6215ee106455ce13fcc890b25983c7590171ca5858a894d7d696fc25"} Dec 12 15:51:38 crc kubenswrapper[4693]: I1212 15:51:38.589947 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:51:38 crc kubenswrapper[4693]: I1212 15:51:38.595336 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:51:38 crc kubenswrapper[4693]: I1212 15:51:38.612921 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" podStartSLOduration=5.612888764 podStartE2EDuration="5.612888764s" podCreationTimestamp="2025-12-12 15:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:51:38.609353773 +0000 UTC m=+325.777993374" watchObservedRunningTime="2025-12-12 15:51:38.612888764 +0000 UTC m=+325.781528375" Dec 12 15:51:50 crc kubenswrapper[4693]: I1212 15:51:50.488742 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dd96d466b-n7trc"] Dec 12 15:51:50 crc kubenswrapper[4693]: I1212 15:51:50.489595 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" podUID="d73652d2-0097-41aa-867f-332bba0d8f78" containerName="controller-manager" containerID="cri-o://4c7257e8f915f7bbef581c8a2c68d3cdf7df6fdc6427b7adfb90c5ab41172a65" gracePeriod=30 Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.540301 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.566754 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6964955f74-9kcjr"] Dec 12 15:51:51 crc kubenswrapper[4693]: E1212 15:51:51.567007 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d73652d2-0097-41aa-867f-332bba0d8f78" containerName="controller-manager" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.567023 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="d73652d2-0097-41aa-867f-332bba0d8f78" containerName="controller-manager" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.567119 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="d73652d2-0097-41aa-867f-332bba0d8f78" containerName="controller-manager" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.567535 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.614532 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6964955f74-9kcjr"] Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.625864 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-config\") pod \"d73652d2-0097-41aa-867f-332bba0d8f78\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.625905 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73652d2-0097-41aa-867f-332bba0d8f78-serving-cert\") pod \"d73652d2-0097-41aa-867f-332bba0d8f78\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.625970 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-proxy-ca-bundles\") pod \"d73652d2-0097-41aa-867f-332bba0d8f78\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.626010 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-client-ca\") pod \"d73652d2-0097-41aa-867f-332bba0d8f78\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.626042 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9dgx\" (UniqueName: \"kubernetes.io/projected/d73652d2-0097-41aa-867f-332bba0d8f78-kube-api-access-j9dgx\") pod \"d73652d2-0097-41aa-867f-332bba0d8f78\" (UID: \"d73652d2-0097-41aa-867f-332bba0d8f78\") " Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.626212 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4741216c-0a8d-4079-b459-cb459dc4f5b3-client-ca\") pod \"controller-manager-6964955f74-9kcjr\" (UID: \"4741216c-0a8d-4079-b459-cb459dc4f5b3\") " pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.626247 4693 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4741216c-0a8d-4079-b459-cb459dc4f5b3-proxy-ca-bundles\") pod \"controller-manager-6964955f74-9kcjr\" (UID: \"4741216c-0a8d-4079-b459-cb459dc4f5b3\") " pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.626312 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqpx8\" (UniqueName: \"kubernetes.io/projected/4741216c-0a8d-4079-b459-cb459dc4f5b3-kube-api-access-dqpx8\") pod \"controller-manager-6964955f74-9kcjr\" (UID: \"4741216c-0a8d-4079-b459-cb459dc4f5b3\") " pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.626363 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4741216c-0a8d-4079-b459-cb459dc4f5b3-config\") pod \"controller-manager-6964955f74-9kcjr\" (UID: \"4741216c-0a8d-4079-b459-cb459dc4f5b3\") " pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.626411 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4741216c-0a8d-4079-b459-cb459dc4f5b3-serving-cert\") pod \"controller-manager-6964955f74-9kcjr\" (UID: \"4741216c-0a8d-4079-b459-cb459dc4f5b3\") " pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.626832 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d73652d2-0097-41aa-867f-332bba0d8f78" (UID: "d73652d2-0097-41aa-867f-332bba0d8f78"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.626871 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-client-ca" (OuterVolumeSpecName: "client-ca") pod "d73652d2-0097-41aa-867f-332bba0d8f78" (UID: "d73652d2-0097-41aa-867f-332bba0d8f78"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.626946 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-config" (OuterVolumeSpecName: "config") pod "d73652d2-0097-41aa-867f-332bba0d8f78" (UID: "d73652d2-0097-41aa-867f-332bba0d8f78"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.631409 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d73652d2-0097-41aa-867f-332bba0d8f78-kube-api-access-j9dgx" (OuterVolumeSpecName: "kube-api-access-j9dgx") pod "d73652d2-0097-41aa-867f-332bba0d8f78" (UID: "d73652d2-0097-41aa-867f-332bba0d8f78"). InnerVolumeSpecName "kube-api-access-j9dgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.640057 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d73652d2-0097-41aa-867f-332bba0d8f78-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d73652d2-0097-41aa-867f-332bba0d8f78" (UID: "d73652d2-0097-41aa-867f-332bba0d8f78"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.658845 4693 generic.go:334] "Generic (PLEG): container finished" podID="d73652d2-0097-41aa-867f-332bba0d8f78" containerID="4c7257e8f915f7bbef581c8a2c68d3cdf7df6fdc6427b7adfb90c5ab41172a65" exitCode=0 Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.658894 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" event={"ID":"d73652d2-0097-41aa-867f-332bba0d8f78","Type":"ContainerDied","Data":"4c7257e8f915f7bbef581c8a2c68d3cdf7df6fdc6427b7adfb90c5ab41172a65"} Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.658907 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.658937 4693 scope.go:117] "RemoveContainer" containerID="4c7257e8f915f7bbef581c8a2c68d3cdf7df6fdc6427b7adfb90c5ab41172a65" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.658924 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd96d466b-n7trc" event={"ID":"d73652d2-0097-41aa-867f-332bba0d8f78","Type":"ContainerDied","Data":"71c9482eb1badec39e6184b21fde1ffc39f667dcca8bc7d13b75536da0dfb8f9"} Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.677489 4693 scope.go:117] "RemoveContainer" containerID="4c7257e8f915f7bbef581c8a2c68d3cdf7df6fdc6427b7adfb90c5ab41172a65" Dec 12 15:51:51 crc kubenswrapper[4693]: E1212 15:51:51.678221 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c7257e8f915f7bbef581c8a2c68d3cdf7df6fdc6427b7adfb90c5ab41172a65\": container with ID starting with 4c7257e8f915f7bbef581c8a2c68d3cdf7df6fdc6427b7adfb90c5ab41172a65 not found: ID does not exist" containerID="4c7257e8f915f7bbef581c8a2c68d3cdf7df6fdc6427b7adfb90c5ab41172a65" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.678264 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c7257e8f915f7bbef581c8a2c68d3cdf7df6fdc6427b7adfb90c5ab41172a65"} err="failed to get container status \"4c7257e8f915f7bbef581c8a2c68d3cdf7df6fdc6427b7adfb90c5ab41172a65\": rpc error: code = NotFound desc = could not find container \"4c7257e8f915f7bbef581c8a2c68d3cdf7df6fdc6427b7adfb90c5ab41172a65\": container with ID starting with 4c7257e8f915f7bbef581c8a2c68d3cdf7df6fdc6427b7adfb90c5ab41172a65 not found: ID does not exist" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.688694 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dd96d466b-n7trc"] Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.693310 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6dd96d466b-n7trc"] Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.727118 4693 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4741216c-0a8d-4079-b459-cb459dc4f5b3-config\") pod \"controller-manager-6964955f74-9kcjr\" (UID: \"4741216c-0a8d-4079-b459-cb459dc4f5b3\") " pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.727196 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4741216c-0a8d-4079-b459-cb459dc4f5b3-serving-cert\") pod \"controller-manager-6964955f74-9kcjr\" (UID: \"4741216c-0a8d-4079-b459-cb459dc4f5b3\") " pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.727229 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4741216c-0a8d-4079-b459-cb459dc4f5b3-client-ca\") pod \"controller-manager-6964955f74-9kcjr\" (UID: \"4741216c-0a8d-4079-b459-cb459dc4f5b3\") " pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.727249 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4741216c-0a8d-4079-b459-cb459dc4f5b3-proxy-ca-bundles\") pod \"controller-manager-6964955f74-9kcjr\" (UID: \"4741216c-0a8d-4079-b459-cb459dc4f5b3\") " pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.727287 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqpx8\" (UniqueName: \"kubernetes.io/projected/4741216c-0a8d-4079-b459-cb459dc4f5b3-kube-api-access-dqpx8\") pod \"controller-manager-6964955f74-9kcjr\" (UID: \"4741216c-0a8d-4079-b459-cb459dc4f5b3\") " pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.727340 4693 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.727360 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9dgx\" (UniqueName: \"kubernetes.io/projected/d73652d2-0097-41aa-867f-332bba0d8f78-kube-api-access-j9dgx\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.727374 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73652d2-0097-41aa-867f-332bba0d8f78-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.727386 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.727397 4693 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d73652d2-0097-41aa-867f-332bba0d8f78-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.728245 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4741216c-0a8d-4079-b459-cb459dc4f5b3-client-ca\") pod 
\"controller-manager-6964955f74-9kcjr\" (UID: \"4741216c-0a8d-4079-b459-cb459dc4f5b3\") " pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.728802 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4741216c-0a8d-4079-b459-cb459dc4f5b3-config\") pod \"controller-manager-6964955f74-9kcjr\" (UID: \"4741216c-0a8d-4079-b459-cb459dc4f5b3\") " pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.729101 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4741216c-0a8d-4079-b459-cb459dc4f5b3-proxy-ca-bundles\") pod \"controller-manager-6964955f74-9kcjr\" (UID: \"4741216c-0a8d-4079-b459-cb459dc4f5b3\") " pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.731657 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4741216c-0a8d-4079-b459-cb459dc4f5b3-serving-cert\") pod \"controller-manager-6964955f74-9kcjr\" (UID: \"4741216c-0a8d-4079-b459-cb459dc4f5b3\") " pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.744364 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqpx8\" (UniqueName: \"kubernetes.io/projected/4741216c-0a8d-4079-b459-cb459dc4f5b3-kube-api-access-dqpx8\") pod \"controller-manager-6964955f74-9kcjr\" (UID: \"4741216c-0a8d-4079-b459-cb459dc4f5b3\") " pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:51 crc kubenswrapper[4693]: I1212 15:51:51.881780 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:52 crc kubenswrapper[4693]: I1212 15:51:52.081334 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6964955f74-9kcjr"] Dec 12 15:51:52 crc kubenswrapper[4693]: I1212 15:51:52.666739 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" event={"ID":"4741216c-0a8d-4079-b459-cb459dc4f5b3","Type":"ContainerStarted","Data":"596d34b0d0c5f7955cda287604606de90aad230ea92bd147f6351762f75fc764"} Dec 12 15:51:52 crc kubenswrapper[4693]: I1212 15:51:52.667187 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" event={"ID":"4741216c-0a8d-4079-b459-cb459dc4f5b3","Type":"ContainerStarted","Data":"08727b9d5b23b42b73388c1bc0385cc9eab338af1b9f3f20689c454d2ddc3ed5"} Dec 12 15:51:52 crc kubenswrapper[4693]: I1212 15:51:52.667208 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:52 crc kubenswrapper[4693]: I1212 15:51:52.681494 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 15:51:52 crc kubenswrapper[4693]: I1212 15:51:52.687398 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" podStartSLOduration=2.687372579 podStartE2EDuration="2.687372579s" podCreationTimestamp="2025-12-12 15:51:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:51:52.682916924 +0000 UTC m=+339.851556535" watchObservedRunningTime="2025-12-12 15:51:52.687372579 +0000 UTC m=+339.856012200" Dec 12 15:51:53 crc kubenswrapper[4693]: I1212 15:51:53.363300 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d73652d2-0097-41aa-867f-332bba0d8f78" path="/var/lib/kubelet/pods/d73652d2-0097-41aa-867f-332bba0d8f78/volumes" Dec 12 15:51:55 crc kubenswrapper[4693]: I1212 15:51:55.380778 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ctq58"] Dec 12 15:51:55 crc kubenswrapper[4693]: I1212 15:51:55.381498 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ctq58" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" containerName="registry-server" containerID="cri-o://a6e0d54ef189f3fe3850813ef14d07bb6ad944d2080e313f87c5909d94981054" gracePeriod=2 Dec 12 15:51:55 crc kubenswrapper[4693]: I1212 15:51:55.689883 4693 generic.go:334] "Generic (PLEG): container finished" podID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" containerID="a6e0d54ef189f3fe3850813ef14d07bb6ad944d2080e313f87c5909d94981054" exitCode=0 Dec 12 15:51:55 crc kubenswrapper[4693]: I1212 15:51:55.690014 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctq58" event={"ID":"77421421-26f5-4e9a-8857-bd1f5a9d8fa9","Type":"ContainerDied","Data":"a6e0d54ef189f3fe3850813ef14d07bb6ad944d2080e313f87c5909d94981054"} Dec 12 15:51:55 crc kubenswrapper[4693]: I1212 15:51:55.825123 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ctq58" Dec 12 15:51:55 crc kubenswrapper[4693]: I1212 15:51:55.899704 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-utilities\") pod \"77421421-26f5-4e9a-8857-bd1f5a9d8fa9\" (UID: \"77421421-26f5-4e9a-8857-bd1f5a9d8fa9\") " Dec 12 15:51:55 crc kubenswrapper[4693]: I1212 15:51:55.899818 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86drp\" (UniqueName: \"kubernetes.io/projected/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-kube-api-access-86drp\") pod \"77421421-26f5-4e9a-8857-bd1f5a9d8fa9\" (UID: \"77421421-26f5-4e9a-8857-bd1f5a9d8fa9\") " Dec 12 15:51:55 crc kubenswrapper[4693]: I1212 15:51:55.899874 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-catalog-content\") pod \"77421421-26f5-4e9a-8857-bd1f5a9d8fa9\" (UID: \"77421421-26f5-4e9a-8857-bd1f5a9d8fa9\") " Dec 12 15:51:55 crc kubenswrapper[4693]: I1212 15:51:55.900545 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-utilities" (OuterVolumeSpecName: "utilities") pod "77421421-26f5-4e9a-8857-bd1f5a9d8fa9" (UID: "77421421-26f5-4e9a-8857-bd1f5a9d8fa9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:51:55 crc kubenswrapper[4693]: I1212 15:51:55.919506 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-kube-api-access-86drp" (OuterVolumeSpecName: "kube-api-access-86drp") pod "77421421-26f5-4e9a-8857-bd1f5a9d8fa9" (UID: "77421421-26f5-4e9a-8857-bd1f5a9d8fa9"). InnerVolumeSpecName "kube-api-access-86drp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:51:55 crc kubenswrapper[4693]: I1212 15:51:55.957507 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77421421-26f5-4e9a-8857-bd1f5a9d8fa9" (UID: "77421421-26f5-4e9a-8857-bd1f5a9d8fa9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:51:56 crc kubenswrapper[4693]: I1212 15:51:56.001627 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:56 crc kubenswrapper[4693]: I1212 15:51:56.001674 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86drp\" (UniqueName: \"kubernetes.io/projected/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-kube-api-access-86drp\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:56 crc kubenswrapper[4693]: I1212 15:51:56.001692 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77421421-26f5-4e9a-8857-bd1f5a9d8fa9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:56 crc kubenswrapper[4693]: I1212 15:51:56.698480 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctq58" event={"ID":"77421421-26f5-4e9a-8857-bd1f5a9d8fa9","Type":"ContainerDied","Data":"76ad148b5959544971c0eacc7f81a3e9011b261f9896d241eb07947ded9c6ba8"} Dec 12 15:51:56 crc kubenswrapper[4693]: I1212 15:51:56.698530 4693 scope.go:117] "RemoveContainer" containerID="a6e0d54ef189f3fe3850813ef14d07bb6ad944d2080e313f87c5909d94981054" Dec 12 15:51:56 crc kubenswrapper[4693]: I1212 15:51:56.698549 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ctq58" Dec 12 15:51:56 crc kubenswrapper[4693]: I1212 15:51:56.717025 4693 scope.go:117] "RemoveContainer" containerID="7c576c29b8d074a0dea628a11853602826fa4335cbec1543b0ce6264e9a10592" Dec 12 15:51:56 crc kubenswrapper[4693]: I1212 15:51:56.737371 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ctq58"] Dec 12 15:51:56 crc kubenswrapper[4693]: I1212 15:51:56.745262 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ctq58"] Dec 12 15:51:56 crc kubenswrapper[4693]: I1212 15:51:56.752624 4693 scope.go:117] "RemoveContainer" containerID="6caef5a4390a4072bc15cf605024f0078524b2cc6a334bd0b5f50f1554241b7a" Dec 12 15:51:56 crc kubenswrapper[4693]: E1212 15:51:56.783791 4693 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77421421_26f5_4e9a_8857_bd1f5a9d8fa9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77421421_26f5_4e9a_8857_bd1f5a9d8fa9.slice/crio-76ad148b5959544971c0eacc7f81a3e9011b261f9896d241eb07947ded9c6ba8\": RecentStats: unable to find data in memory cache]" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.181320 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb99c"] Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.183498 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kb99c" podUID="58415397-b1c4-41c4-abd4-518a27eda647" containerName="registry-server" containerID="cri-o://1714907fe89887ef24be8084bdf19db55965350dcbbb5bb55acdbcca9ab041f0" gracePeriod=2 Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.364294 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" path="/var/lib/kubelet/pods/77421421-26f5-4e9a-8857-bd1f5a9d8fa9/volumes" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.622704 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.708920 4693 generic.go:334] "Generic (PLEG): container finished" podID="58415397-b1c4-41c4-abd4-518a27eda647" containerID="1714907fe89887ef24be8084bdf19db55965350dcbbb5bb55acdbcca9ab041f0" exitCode=0 Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.708956 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb99c" event={"ID":"58415397-b1c4-41c4-abd4-518a27eda647","Type":"ContainerDied","Data":"1714907fe89887ef24be8084bdf19db55965350dcbbb5bb55acdbcca9ab041f0"} Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.708978 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb99c" event={"ID":"58415397-b1c4-41c4-abd4-518a27eda647","Type":"ContainerDied","Data":"a5a5de1916612650b0d5d50a79ba79ca3d62c4af01263434a9461a5232c052e9"} Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.708995 4693 scope.go:117] "RemoveContainer" containerID="1714907fe89887ef24be8084bdf19db55965350dcbbb5bb55acdbcca9ab041f0" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.709004 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb99c" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.721855 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdsw9\" (UniqueName: \"kubernetes.io/projected/58415397-b1c4-41c4-abd4-518a27eda647-kube-api-access-fdsw9\") pod \"58415397-b1c4-41c4-abd4-518a27eda647\" (UID: \"58415397-b1c4-41c4-abd4-518a27eda647\") " Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.721942 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58415397-b1c4-41c4-abd4-518a27eda647-catalog-content\") pod \"58415397-b1c4-41c4-abd4-518a27eda647\" (UID: \"58415397-b1c4-41c4-abd4-518a27eda647\") " Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.721983 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58415397-b1c4-41c4-abd4-518a27eda647-utilities\") pod \"58415397-b1c4-41c4-abd4-518a27eda647\" (UID: \"58415397-b1c4-41c4-abd4-518a27eda647\") " Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.722698 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58415397-b1c4-41c4-abd4-518a27eda647-utilities" (OuterVolumeSpecName: "utilities") pod "58415397-b1c4-41c4-abd4-518a27eda647" (UID: "58415397-b1c4-41c4-abd4-518a27eda647"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.728584 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58415397-b1c4-41c4-abd4-518a27eda647-kube-api-access-fdsw9" (OuterVolumeSpecName: "kube-api-access-fdsw9") pod "58415397-b1c4-41c4-abd4-518a27eda647" (UID: "58415397-b1c4-41c4-abd4-518a27eda647"). InnerVolumeSpecName "kube-api-access-fdsw9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.729687 4693 scope.go:117] "RemoveContainer" containerID="1bc47c5c4fdac778a460a07334cd383766f17901c74c9d8f4ff97665c3709e74" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.749657 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58415397-b1c4-41c4-abd4-518a27eda647-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58415397-b1c4-41c4-abd4-518a27eda647" (UID: "58415397-b1c4-41c4-abd4-518a27eda647"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.757674 4693 scope.go:117] "RemoveContainer" containerID="031f37acfb428dee64494a3681ae7b805b4c632630cd2ba4d6df8174adc43ae7" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.779205 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-klr5t"] Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.779495 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-klr5t" podUID="6a461525-8c58-4454-b928-32dfc677061b" containerName="registry-server" containerID="cri-o://6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60" gracePeriod=2 Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.784472 4693 scope.go:117] "RemoveContainer" containerID="1714907fe89887ef24be8084bdf19db55965350dcbbb5bb55acdbcca9ab041f0" Dec 12 15:51:57 crc kubenswrapper[4693]: E1212 15:51:57.785026 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1714907fe89887ef24be8084bdf19db55965350dcbbb5bb55acdbcca9ab041f0\": container with ID starting with 1714907fe89887ef24be8084bdf19db55965350dcbbb5bb55acdbcca9ab041f0 not found: ID does not exist" containerID="1714907fe89887ef24be8084bdf19db55965350dcbbb5bb55acdbcca9ab041f0" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.785063 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1714907fe89887ef24be8084bdf19db55965350dcbbb5bb55acdbcca9ab041f0"} err="failed to get container status \"1714907fe89887ef24be8084bdf19db55965350dcbbb5bb55acdbcca9ab041f0\": rpc error: code = NotFound desc = could not find container \"1714907fe89887ef24be8084bdf19db55965350dcbbb5bb55acdbcca9ab041f0\": container with ID starting with 1714907fe89887ef24be8084bdf19db55965350dcbbb5bb55acdbcca9ab041f0 not found: ID does not exist" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.785087 4693 scope.go:117] "RemoveContainer" containerID="1bc47c5c4fdac778a460a07334cd383766f17901c74c9d8f4ff97665c3709e74" Dec 12 15:51:57 crc kubenswrapper[4693]: E1212 15:51:57.785355 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bc47c5c4fdac778a460a07334cd383766f17901c74c9d8f4ff97665c3709e74\": container with ID starting with 1bc47c5c4fdac778a460a07334cd383766f17901c74c9d8f4ff97665c3709e74 not found: ID does not exist" containerID="1bc47c5c4fdac778a460a07334cd383766f17901c74c9d8f4ff97665c3709e74" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.785379 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bc47c5c4fdac778a460a07334cd383766f17901c74c9d8f4ff97665c3709e74"} err="failed to get container status 
\"1bc47c5c4fdac778a460a07334cd383766f17901c74c9d8f4ff97665c3709e74\": rpc error: code = NotFound desc = could not find container \"1bc47c5c4fdac778a460a07334cd383766f17901c74c9d8f4ff97665c3709e74\": container with ID starting with 1bc47c5c4fdac778a460a07334cd383766f17901c74c9d8f4ff97665c3709e74 not found: ID does not exist" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.785396 4693 scope.go:117] "RemoveContainer" containerID="031f37acfb428dee64494a3681ae7b805b4c632630cd2ba4d6df8174adc43ae7" Dec 12 15:51:57 crc kubenswrapper[4693]: E1212 15:51:57.785588 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"031f37acfb428dee64494a3681ae7b805b4c632630cd2ba4d6df8174adc43ae7\": container with ID starting with 031f37acfb428dee64494a3681ae7b805b4c632630cd2ba4d6df8174adc43ae7 not found: ID does not exist" containerID="031f37acfb428dee64494a3681ae7b805b4c632630cd2ba4d6df8174adc43ae7" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.785615 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"031f37acfb428dee64494a3681ae7b805b4c632630cd2ba4d6df8174adc43ae7"} err="failed to get container status \"031f37acfb428dee64494a3681ae7b805b4c632630cd2ba4d6df8174adc43ae7\": rpc error: code = NotFound desc = could not find container \"031f37acfb428dee64494a3681ae7b805b4c632630cd2ba4d6df8174adc43ae7\": container with ID starting with 031f37acfb428dee64494a3681ae7b805b4c632630cd2ba4d6df8174adc43ae7 not found: ID does not exist" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.823533 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdsw9\" (UniqueName: \"kubernetes.io/projected/58415397-b1c4-41c4-abd4-518a27eda647-kube-api-access-fdsw9\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.823566 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58415397-b1c4-41c4-abd4-518a27eda647-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:57 crc kubenswrapper[4693]: I1212 15:51:57.823575 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58415397-b1c4-41c4-abd4-518a27eda647-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.045620 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb99c"] Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.050127 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb99c"] Dec 12 15:51:58 crc kubenswrapper[4693]: E1212 15:51:58.172638 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60 is running failed: container process not found" containerID="6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 15:51:58 crc kubenswrapper[4693]: E1212 15:51:58.172926 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60 is running failed: container process not found" 
containerID="6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 15:51:58 crc kubenswrapper[4693]: E1212 15:51:58.173201 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60 is running failed: container process not found" containerID="6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 15:51:58 crc kubenswrapper[4693]: E1212 15:51:58.173233 4693 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-klr5t" podUID="6a461525-8c58-4454-b928-32dfc677061b" containerName="registry-server" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.184470 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.229101 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a461525-8c58-4454-b928-32dfc677061b-utilities\") pod \"6a461525-8c58-4454-b928-32dfc677061b\" (UID: \"6a461525-8c58-4454-b928-32dfc677061b\") " Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.229184 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw7vg\" (UniqueName: \"kubernetes.io/projected/6a461525-8c58-4454-b928-32dfc677061b-kube-api-access-bw7vg\") pod \"6a461525-8c58-4454-b928-32dfc677061b\" (UID: \"6a461525-8c58-4454-b928-32dfc677061b\") " Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.229371 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a461525-8c58-4454-b928-32dfc677061b-catalog-content\") pod \"6a461525-8c58-4454-b928-32dfc677061b\" (UID: \"6a461525-8c58-4454-b928-32dfc677061b\") " Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.230496 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a461525-8c58-4454-b928-32dfc677061b-utilities" (OuterVolumeSpecName: "utilities") pod "6a461525-8c58-4454-b928-32dfc677061b" (UID: "6a461525-8c58-4454-b928-32dfc677061b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.233814 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a461525-8c58-4454-b928-32dfc677061b-kube-api-access-bw7vg" (OuterVolumeSpecName: "kube-api-access-bw7vg") pod "6a461525-8c58-4454-b928-32dfc677061b" (UID: "6a461525-8c58-4454-b928-32dfc677061b"). InnerVolumeSpecName "kube-api-access-bw7vg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.330741 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a461525-8c58-4454-b928-32dfc677061b-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.330772 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw7vg\" (UniqueName: \"kubernetes.io/projected/6a461525-8c58-4454-b928-32dfc677061b-kube-api-access-bw7vg\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.340823 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a461525-8c58-4454-b928-32dfc677061b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a461525-8c58-4454-b928-32dfc677061b" (UID: "6a461525-8c58-4454-b928-32dfc677061b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.431847 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a461525-8c58-4454-b928-32dfc677061b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.725079 4693 generic.go:334] "Generic (PLEG): container finished" podID="6a461525-8c58-4454-b928-32dfc677061b" containerID="6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60" exitCode=0 Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.725139 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klr5t" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.725177 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klr5t" event={"ID":"6a461525-8c58-4454-b928-32dfc677061b","Type":"ContainerDied","Data":"6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60"} Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.725222 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klr5t" event={"ID":"6a461525-8c58-4454-b928-32dfc677061b","Type":"ContainerDied","Data":"ee31bd8582cab08f405b8b5cffd7550217beb91dcbe54284bbfb5e3acba49d46"} Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.725246 4693 scope.go:117] "RemoveContainer" containerID="6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.762948 4693 scope.go:117] "RemoveContainer" containerID="755f303c7fae100c87c87307a41f01ac030c685925927c7b6d08fd5f866be186" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.772762 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-klr5t"] Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.777724 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-klr5t"] Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.801094 4693 scope.go:117] "RemoveContainer" containerID="761afdfa409c1959e9320f860aff9f38444f604e8348376ace34e21ece382b38" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.815546 4693 scope.go:117] "RemoveContainer" containerID="6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60" Dec 12 15:51:58 crc kubenswrapper[4693]: E1212 15:51:58.815893 4693 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60\": container with ID starting with 6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60 not found: ID does not exist" containerID="6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.815926 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60"} err="failed to get container status \"6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60\": rpc error: code = NotFound desc = could not find container \"6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60\": container with ID starting with 6ad7d97a7e824c759705dd4fcb3555907b680485433424f5b18ba6fdffb4cd60 not found: ID does not exist" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.815950 4693 scope.go:117] "RemoveContainer" containerID="755f303c7fae100c87c87307a41f01ac030c685925927c7b6d08fd5f866be186" Dec 12 15:51:58 crc kubenswrapper[4693]: E1212 15:51:58.816188 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"755f303c7fae100c87c87307a41f01ac030c685925927c7b6d08fd5f866be186\": container with ID starting with 755f303c7fae100c87c87307a41f01ac030c685925927c7b6d08fd5f866be186 not found: ID does not exist" containerID="755f303c7fae100c87c87307a41f01ac030c685925927c7b6d08fd5f866be186" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.816209 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"755f303c7fae100c87c87307a41f01ac030c685925927c7b6d08fd5f866be186"} err="failed to get container status \"755f303c7fae100c87c87307a41f01ac030c685925927c7b6d08fd5f866be186\": rpc error: code = NotFound desc = could not find container \"755f303c7fae100c87c87307a41f01ac030c685925927c7b6d08fd5f866be186\": container with ID starting with 755f303c7fae100c87c87307a41f01ac030c685925927c7b6d08fd5f866be186 not found: ID does not exist" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.816221 4693 scope.go:117] "RemoveContainer" containerID="761afdfa409c1959e9320f860aff9f38444f604e8348376ace34e21ece382b38" Dec 12 15:51:58 crc kubenswrapper[4693]: E1212 15:51:58.816595 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"761afdfa409c1959e9320f860aff9f38444f604e8348376ace34e21ece382b38\": container with ID starting with 761afdfa409c1959e9320f860aff9f38444f604e8348376ace34e21ece382b38 not found: ID does not exist" containerID="761afdfa409c1959e9320f860aff9f38444f604e8348376ace34e21ece382b38" Dec 12 15:51:58 crc kubenswrapper[4693]: I1212 15:51:58.816616 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"761afdfa409c1959e9320f860aff9f38444f604e8348376ace34e21ece382b38"} err="failed to get container status \"761afdfa409c1959e9320f860aff9f38444f604e8348376ace34e21ece382b38\": rpc error: code = NotFound desc = could not find container \"761afdfa409c1959e9320f860aff9f38444f604e8348376ace34e21ece382b38\": container with ID starting with 761afdfa409c1959e9320f860aff9f38444f604e8348376ace34e21ece382b38 not found: ID does not exist" Dec 12 15:51:59 crc kubenswrapper[4693]: I1212 15:51:59.364519 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="58415397-b1c4-41c4-abd4-518a27eda647" path="/var/lib/kubelet/pods/58415397-b1c4-41c4-abd4-518a27eda647/volumes" Dec 12 15:51:59 crc kubenswrapper[4693]: I1212 15:51:59.365260 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a461525-8c58-4454-b928-32dfc677061b" path="/var/lib/kubelet/pods/6a461525-8c58-4454-b928-32dfc677061b/volumes" Dec 12 15:52:12 crc kubenswrapper[4693]: I1212 15:52:12.530633 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:52:12 crc kubenswrapper[4693]: I1212 15:52:12.531176 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:52:30 crc kubenswrapper[4693]: I1212 15:52:30.571705 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z"] Dec 12 15:52:30 crc kubenswrapper[4693]: I1212 15:52:30.572630 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" podUID="d7490206-a510-4180-baeb-1064f8536458" containerName="route-controller-manager" containerID="cri-o://e19a92003e74e97a2390b61995fb826d311f4b30490f75c208daf86ed09cbb9c" gracePeriod=30 Dec 12 15:52:30 crc kubenswrapper[4693]: I1212 15:52:30.913077 4693 generic.go:334] "Generic (PLEG): container finished" podID="d7490206-a510-4180-baeb-1064f8536458" containerID="e19a92003e74e97a2390b61995fb826d311f4b30490f75c208daf86ed09cbb9c" exitCode=0 Dec 12 15:52:30 crc kubenswrapper[4693]: I1212 15:52:30.913170 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" event={"ID":"d7490206-a510-4180-baeb-1064f8536458","Type":"ContainerDied","Data":"e19a92003e74e97a2390b61995fb826d311f4b30490f75c208daf86ed09cbb9c"} Dec 12 15:52:30 crc kubenswrapper[4693]: I1212 15:52:30.936647 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.071837 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7490206-a510-4180-baeb-1064f8536458-serving-cert\") pod \"d7490206-a510-4180-baeb-1064f8536458\" (UID: \"d7490206-a510-4180-baeb-1064f8536458\") " Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.071900 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz9tn\" (UniqueName: \"kubernetes.io/projected/d7490206-a510-4180-baeb-1064f8536458-kube-api-access-rz9tn\") pod \"d7490206-a510-4180-baeb-1064f8536458\" (UID: \"d7490206-a510-4180-baeb-1064f8536458\") " Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.071939 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7490206-a510-4180-baeb-1064f8536458-client-ca\") pod \"d7490206-a510-4180-baeb-1064f8536458\" (UID: \"d7490206-a510-4180-baeb-1064f8536458\") " Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.071965 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7490206-a510-4180-baeb-1064f8536458-config\") pod \"d7490206-a510-4180-baeb-1064f8536458\" (UID: \"d7490206-a510-4180-baeb-1064f8536458\") " Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.073048 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7490206-a510-4180-baeb-1064f8536458-client-ca" (OuterVolumeSpecName: "client-ca") pod "d7490206-a510-4180-baeb-1064f8536458" (UID: "d7490206-a510-4180-baeb-1064f8536458"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.073074 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7490206-a510-4180-baeb-1064f8536458-config" (OuterVolumeSpecName: "config") pod "d7490206-a510-4180-baeb-1064f8536458" (UID: "d7490206-a510-4180-baeb-1064f8536458"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.078113 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7490206-a510-4180-baeb-1064f8536458-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7490206-a510-4180-baeb-1064f8536458" (UID: "d7490206-a510-4180-baeb-1064f8536458"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.078911 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7490206-a510-4180-baeb-1064f8536458-kube-api-access-rz9tn" (OuterVolumeSpecName: "kube-api-access-rz9tn") pod "d7490206-a510-4180-baeb-1064f8536458" (UID: "d7490206-a510-4180-baeb-1064f8536458"). InnerVolumeSpecName "kube-api-access-rz9tn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.173061 4693 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7490206-a510-4180-baeb-1064f8536458-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.173100 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz9tn\" (UniqueName: \"kubernetes.io/projected/d7490206-a510-4180-baeb-1064f8536458-kube-api-access-rz9tn\") on node \"crc\" DevicePath \"\"" Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.173114 4693 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7490206-a510-4180-baeb-1064f8536458-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.173124 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7490206-a510-4180-baeb-1064f8536458-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.922300 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" event={"ID":"d7490206-a510-4180-baeb-1064f8536458","Type":"ContainerDied","Data":"c17a2b8e6215ee106455ce13fcc890b25983c7590171ca5858a894d7d696fc25"} Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.922351 4693 scope.go:117] "RemoveContainer" containerID="e19a92003e74e97a2390b61995fb826d311f4b30490f75c208daf86ed09cbb9c" Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.922360 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z" Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.948473 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z"] Dec 12 15:52:31 crc kubenswrapper[4693]: I1212 15:52:31.953425 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb87b5856-vz82z"] Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.363013 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d"] Dec 12 15:52:32 crc kubenswrapper[4693]: E1212 15:52:32.363303 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7490206-a510-4180-baeb-1064f8536458" containerName="route-controller-manager" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.363325 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7490206-a510-4180-baeb-1064f8536458" containerName="route-controller-manager" Dec 12 15:52:32 crc kubenswrapper[4693]: E1212 15:52:32.363339 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58415397-b1c4-41c4-abd4-518a27eda647" containerName="registry-server" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.363348 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="58415397-b1c4-41c4-abd4-518a27eda647" containerName="registry-server" Dec 12 15:52:32 crc kubenswrapper[4693]: E1212 15:52:32.363361 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" containerName="extract-utilities" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.363371 
4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" containerName="extract-utilities" Dec 12 15:52:32 crc kubenswrapper[4693]: E1212 15:52:32.363410 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a461525-8c58-4454-b928-32dfc677061b" containerName="extract-content" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.363419 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a461525-8c58-4454-b928-32dfc677061b" containerName="extract-content" Dec 12 15:52:32 crc kubenswrapper[4693]: E1212 15:52:32.363427 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" containerName="extract-content" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.363437 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" containerName="extract-content" Dec 12 15:52:32 crc kubenswrapper[4693]: E1212 15:52:32.363453 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a461525-8c58-4454-b928-32dfc677061b" containerName="extract-utilities" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.363461 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a461525-8c58-4454-b928-32dfc677061b" containerName="extract-utilities" Dec 12 15:52:32 crc kubenswrapper[4693]: E1212 15:52:32.363473 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" containerName="registry-server" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.363480 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" containerName="registry-server" Dec 12 15:52:32 crc kubenswrapper[4693]: E1212 15:52:32.363492 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a461525-8c58-4454-b928-32dfc677061b" containerName="registry-server" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.363500 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a461525-8c58-4454-b928-32dfc677061b" containerName="registry-server" Dec 12 15:52:32 crc kubenswrapper[4693]: E1212 15:52:32.363510 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58415397-b1c4-41c4-abd4-518a27eda647" containerName="extract-utilities" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.363517 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="58415397-b1c4-41c4-abd4-518a27eda647" containerName="extract-utilities" Dec 12 15:52:32 crc kubenswrapper[4693]: E1212 15:52:32.363530 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58415397-b1c4-41c4-abd4-518a27eda647" containerName="extract-content" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.363538 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="58415397-b1c4-41c4-abd4-518a27eda647" containerName="extract-content" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.363671 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a461525-8c58-4454-b928-32dfc677061b" containerName="registry-server" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.363683 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7490206-a510-4180-baeb-1064f8536458" containerName="route-controller-manager" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.363694 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="77421421-26f5-4e9a-8857-bd1f5a9d8fa9" containerName="registry-server" Dec 12 15:52:32 
crc kubenswrapper[4693]: I1212 15:52:32.363706 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="58415397-b1c4-41c4-abd4-518a27eda647" containerName="registry-server" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.364161 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.366810 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.367631 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.367635 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.367734 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.367859 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.373403 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.375452 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d"] Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.491408 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41b94683-51bf-4720-9160-36bd373d88ba-serving-cert\") pod \"route-controller-manager-66fdbf566b-4w29d\" (UID: \"41b94683-51bf-4720-9160-36bd373d88ba\") " pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.491476 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41b94683-51bf-4720-9160-36bd373d88ba-client-ca\") pod \"route-controller-manager-66fdbf566b-4w29d\" (UID: \"41b94683-51bf-4720-9160-36bd373d88ba\") " pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.491542 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41b94683-51bf-4720-9160-36bd373d88ba-config\") pod \"route-controller-manager-66fdbf566b-4w29d\" (UID: \"41b94683-51bf-4720-9160-36bd373d88ba\") " pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.491777 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlghr\" (UniqueName: \"kubernetes.io/projected/41b94683-51bf-4720-9160-36bd373d88ba-kube-api-access-qlghr\") pod \"route-controller-manager-66fdbf566b-4w29d\" (UID: \"41b94683-51bf-4720-9160-36bd373d88ba\") " 
pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.593413 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlghr\" (UniqueName: \"kubernetes.io/projected/41b94683-51bf-4720-9160-36bd373d88ba-kube-api-access-qlghr\") pod \"route-controller-manager-66fdbf566b-4w29d\" (UID: \"41b94683-51bf-4720-9160-36bd373d88ba\") " pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.593506 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41b94683-51bf-4720-9160-36bd373d88ba-serving-cert\") pod \"route-controller-manager-66fdbf566b-4w29d\" (UID: \"41b94683-51bf-4720-9160-36bd373d88ba\") " pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.593527 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41b94683-51bf-4720-9160-36bd373d88ba-client-ca\") pod \"route-controller-manager-66fdbf566b-4w29d\" (UID: \"41b94683-51bf-4720-9160-36bd373d88ba\") " pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.593569 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41b94683-51bf-4720-9160-36bd373d88ba-config\") pod \"route-controller-manager-66fdbf566b-4w29d\" (UID: \"41b94683-51bf-4720-9160-36bd373d88ba\") " pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.594641 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41b94683-51bf-4720-9160-36bd373d88ba-client-ca\") pod \"route-controller-manager-66fdbf566b-4w29d\" (UID: \"41b94683-51bf-4720-9160-36bd373d88ba\") " pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.594777 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41b94683-51bf-4720-9160-36bd373d88ba-config\") pod \"route-controller-manager-66fdbf566b-4w29d\" (UID: \"41b94683-51bf-4720-9160-36bd373d88ba\") " pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.600109 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41b94683-51bf-4720-9160-36bd373d88ba-serving-cert\") pod \"route-controller-manager-66fdbf566b-4w29d\" (UID: \"41b94683-51bf-4720-9160-36bd373d88ba\") " pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.614210 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlghr\" (UniqueName: \"kubernetes.io/projected/41b94683-51bf-4720-9160-36bd373d88ba-kube-api-access-qlghr\") pod \"route-controller-manager-66fdbf566b-4w29d\" (UID: \"41b94683-51bf-4720-9160-36bd373d88ba\") " pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 15:52:32 crc 
kubenswrapper[4693]: I1212 15:52:32.682075 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 15:52:32 crc kubenswrapper[4693]: I1212 15:52:32.936461 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d"] Dec 12 15:52:33 crc kubenswrapper[4693]: I1212 15:52:33.364427 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7490206-a510-4180-baeb-1064f8536458" path="/var/lib/kubelet/pods/d7490206-a510-4180-baeb-1064f8536458/volumes" Dec 12 15:52:33 crc kubenswrapper[4693]: I1212 15:52:33.938414 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" event={"ID":"41b94683-51bf-4720-9160-36bd373d88ba","Type":"ContainerStarted","Data":"05594e0e6948065007460b75a1cefd9b075372db15fd8c62321575e79d1e6ee9"} Dec 12 15:52:33 crc kubenswrapper[4693]: I1212 15:52:33.938472 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" event={"ID":"41b94683-51bf-4720-9160-36bd373d88ba","Type":"ContainerStarted","Data":"f68dd8febe2f0ccb881d1faed21e32579fdaa11feeebbadc1b78556a3432bc74"} Dec 12 15:52:33 crc kubenswrapper[4693]: I1212 15:52:33.939012 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 15:52:33 crc kubenswrapper[4693]: I1212 15:52:33.946449 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 15:52:33 crc kubenswrapper[4693]: I1212 15:52:33.964875 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" podStartSLOduration=3.964846856 podStartE2EDuration="3.964846856s" podCreationTimestamp="2025-12-12 15:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:52:33.959968908 +0000 UTC m=+381.128608509" watchObservedRunningTime="2025-12-12 15:52:33.964846856 +0000 UTC m=+381.133486457" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.158344 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t2jb4"] Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.159348 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.171580 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t2jb4"] Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.230886 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2132cae9-b4b7-48be-bb0a-482d215417af-registry-tls\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.230948 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2132cae9-b4b7-48be-bb0a-482d215417af-trusted-ca\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.231031 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2132cae9-b4b7-48be-bb0a-482d215417af-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.231062 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2132cae9-b4b7-48be-bb0a-482d215417af-bound-sa-token\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.231204 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2132cae9-b4b7-48be-bb0a-482d215417af-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.231320 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2132cae9-b4b7-48be-bb0a-482d215417af-registry-certificates\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.231374 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n57kb\" (UniqueName: \"kubernetes.io/projected/2132cae9-b4b7-48be-bb0a-482d215417af-kube-api-access-n57kb\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.231425 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.259277 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.332189 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2132cae9-b4b7-48be-bb0a-482d215417af-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.332243 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2132cae9-b4b7-48be-bb0a-482d215417af-bound-sa-token\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.332311 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2132cae9-b4b7-48be-bb0a-482d215417af-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.332349 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2132cae9-b4b7-48be-bb0a-482d215417af-registry-certificates\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.332368 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n57kb\" (UniqueName: \"kubernetes.io/projected/2132cae9-b4b7-48be-bb0a-482d215417af-kube-api-access-n57kb\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.332397 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2132cae9-b4b7-48be-bb0a-482d215417af-registry-tls\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.332413 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2132cae9-b4b7-48be-bb0a-482d215417af-trusted-ca\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.333149 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2132cae9-b4b7-48be-bb0a-482d215417af-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.333817 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2132cae9-b4b7-48be-bb0a-482d215417af-trusted-ca\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.334149 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2132cae9-b4b7-48be-bb0a-482d215417af-registry-certificates\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.342325 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2132cae9-b4b7-48be-bb0a-482d215417af-registry-tls\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.344647 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2132cae9-b4b7-48be-bb0a-482d215417af-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.354733 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2132cae9-b4b7-48be-bb0a-482d215417af-bound-sa-token\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.358973 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n57kb\" (UniqueName: \"kubernetes.io/projected/2132cae9-b4b7-48be-bb0a-482d215417af-kube-api-access-n57kb\") pod \"image-registry-66df7c8f76-t2jb4\" (UID: \"2132cae9-b4b7-48be-bb0a-482d215417af\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.523723 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.925263 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t2jb4"] Dec 12 15:52:35 crc kubenswrapper[4693]: W1212 15:52:35.934966 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2132cae9_b4b7_48be_bb0a_482d215417af.slice/crio-5f91fa6c03aaa1f9b97053e506e3eb58f680e5c8a79a88c260b32e5d906dd22b WatchSource:0}: Error finding container 5f91fa6c03aaa1f9b97053e506e3eb58f680e5c8a79a88c260b32e5d906dd22b: Status 404 returned error can't find the container with id 5f91fa6c03aaa1f9b97053e506e3eb58f680e5c8a79a88c260b32e5d906dd22b Dec 12 15:52:35 crc kubenswrapper[4693]: I1212 15:52:35.951689 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" event={"ID":"2132cae9-b4b7-48be-bb0a-482d215417af","Type":"ContainerStarted","Data":"5f91fa6c03aaa1f9b97053e506e3eb58f680e5c8a79a88c260b32e5d906dd22b"} Dec 12 15:52:36 crc kubenswrapper[4693]: I1212 15:52:36.958784 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" event={"ID":"2132cae9-b4b7-48be-bb0a-482d215417af","Type":"ContainerStarted","Data":"77868e10503e3aa692872bf456c07de2207ad0da53034216738d30e9c49ad5f0"} Dec 12 15:52:36 crc kubenswrapper[4693]: I1212 15:52:36.959520 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:36 crc kubenswrapper[4693]: I1212 15:52:36.984765 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" podStartSLOduration=1.984742585 podStartE2EDuration="1.984742585s" podCreationTimestamp="2025-12-12 15:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:52:36.982264645 +0000 UTC m=+384.150904246" watchObservedRunningTime="2025-12-12 15:52:36.984742585 +0000 UTC m=+384.153382186" Dec 12 15:52:42 crc kubenswrapper[4693]: I1212 15:52:42.530430 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:52:42 crc kubenswrapper[4693]: I1212 15:52:42.530845 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:52:55 crc kubenswrapper[4693]: I1212 15:52:55.530444 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" Dec 12 15:52:55 crc kubenswrapper[4693]: I1212 15:52:55.593289 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-47c86"] Dec 12 15:53:12 crc kubenswrapper[4693]: I1212 15:53:12.530715 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:53:12 crc kubenswrapper[4693]: I1212 15:53:12.532004 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:53:12 crc kubenswrapper[4693]: I1212 15:53:12.532154 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:53:13 crc kubenswrapper[4693]: I1212 15:53:13.177955 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ff91bd354fd1b1d52f5914f816ce98932ace1f4aced9a2d721aa0982cc50f10"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 15:53:13 crc kubenswrapper[4693]: I1212 15:53:13.178091 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://5ff91bd354fd1b1d52f5914f816ce98932ace1f4aced9a2d721aa0982cc50f10" gracePeriod=600 Dec 12 15:53:14 crc kubenswrapper[4693]: I1212 15:53:14.184438 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="5ff91bd354fd1b1d52f5914f816ce98932ace1f4aced9a2d721aa0982cc50f10" exitCode=0 Dec 12 15:53:14 crc kubenswrapper[4693]: I1212 15:53:14.184523 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"5ff91bd354fd1b1d52f5914f816ce98932ace1f4aced9a2d721aa0982cc50f10"} Dec 12 15:53:14 crc kubenswrapper[4693]: I1212 15:53:14.184742 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"13e55d11ade86a76f1b5f387056a50a27a64fcc2f93e4354f4e32727ed6ed0c7"} Dec 12 15:53:14 crc kubenswrapper[4693]: I1212 15:53:14.184763 4693 scope.go:117] "RemoveContainer" containerID="37c3cb0141107d54a08958ab4dd5b8bd356a91418425db24b626b3dcc225fd28" Dec 12 15:53:20 crc kubenswrapper[4693]: I1212 15:53:20.640042 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-47c86" podUID="f28a792f-4814-4a24-ab79-3a5b00adb25e" containerName="registry" containerID="cri-o://a73d5a8db64bf11ebef58f441837303b937e0976353efaa0b70ac52f113b6a99" gracePeriod=30 Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.005954 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.034436 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-registry-tls\") pod \"f28a792f-4814-4a24-ab79-3a5b00adb25e\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.034525 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f28a792f-4814-4a24-ab79-3a5b00adb25e-installation-pull-secrets\") pod \"f28a792f-4814-4a24-ab79-3a5b00adb25e\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.034678 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"f28a792f-4814-4a24-ab79-3a5b00adb25e\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.036726 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-trusted-ca\") pod \"f28a792f-4814-4a24-ab79-3a5b00adb25e\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.036777 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-bound-sa-token\") pod \"f28a792f-4814-4a24-ab79-3a5b00adb25e\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.036802 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f28a792f-4814-4a24-ab79-3a5b00adb25e-ca-trust-extracted\") pod \"f28a792f-4814-4a24-ab79-3a5b00adb25e\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.036821 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-registry-certificates\") pod \"f28a792f-4814-4a24-ab79-3a5b00adb25e\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.036850 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv2vw\" (UniqueName: \"kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-kube-api-access-qv2vw\") pod \"f28a792f-4814-4a24-ab79-3a5b00adb25e\" (UID: \"f28a792f-4814-4a24-ab79-3a5b00adb25e\") " Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.038143 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f28a792f-4814-4a24-ab79-3a5b00adb25e" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.039003 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f28a792f-4814-4a24-ab79-3a5b00adb25e" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.043497 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f28a792f-4814-4a24-ab79-3a5b00adb25e" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.043886 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-kube-api-access-qv2vw" (OuterVolumeSpecName: "kube-api-access-qv2vw") pod "f28a792f-4814-4a24-ab79-3a5b00adb25e" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e"). InnerVolumeSpecName "kube-api-access-qv2vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.045396 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f28a792f-4814-4a24-ab79-3a5b00adb25e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f28a792f-4814-4a24-ab79-3a5b00adb25e" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.045920 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f28a792f-4814-4a24-ab79-3a5b00adb25e" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.048980 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "f28a792f-4814-4a24-ab79-3a5b00adb25e" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.060485 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f28a792f-4814-4a24-ab79-3a5b00adb25e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f28a792f-4814-4a24-ab79-3a5b00adb25e" (UID: "f28a792f-4814-4a24-ab79-3a5b00adb25e"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.138533 4693 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.138808 4693 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.138906 4693 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f28a792f-4814-4a24-ab79-3a5b00adb25e-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.139032 4693 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f28a792f-4814-4a24-ab79-3a5b00adb25e-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.139127 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv2vw\" (UniqueName: \"kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-kube-api-access-qv2vw\") on node \"crc\" DevicePath \"\"" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.139215 4693 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f28a792f-4814-4a24-ab79-3a5b00adb25e-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.139344 4693 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f28a792f-4814-4a24-ab79-3a5b00adb25e-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.236312 4693 generic.go:334] "Generic (PLEG): container finished" podID="f28a792f-4814-4a24-ab79-3a5b00adb25e" containerID="a73d5a8db64bf11ebef58f441837303b937e0976353efaa0b70ac52f113b6a99" exitCode=0 Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.236363 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-47c86" event={"ID":"f28a792f-4814-4a24-ab79-3a5b00adb25e","Type":"ContainerDied","Data":"a73d5a8db64bf11ebef58f441837303b937e0976353efaa0b70ac52f113b6a99"} Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.236393 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-47c86" event={"ID":"f28a792f-4814-4a24-ab79-3a5b00adb25e","Type":"ContainerDied","Data":"9e175ac2f88e84c619075a72e01b2f623d0d34699bc1e6a26a67dc75cb0e7d51"} Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.236416 4693 scope.go:117] "RemoveContainer" containerID="a73d5a8db64bf11ebef58f441837303b937e0976353efaa0b70ac52f113b6a99" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.236470 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-47c86" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.272151 4693 scope.go:117] "RemoveContainer" containerID="a73d5a8db64bf11ebef58f441837303b937e0976353efaa0b70ac52f113b6a99" Dec 12 15:53:21 crc kubenswrapper[4693]: E1212 15:53:21.273528 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a73d5a8db64bf11ebef58f441837303b937e0976353efaa0b70ac52f113b6a99\": container with ID starting with a73d5a8db64bf11ebef58f441837303b937e0976353efaa0b70ac52f113b6a99 not found: ID does not exist" containerID="a73d5a8db64bf11ebef58f441837303b937e0976353efaa0b70ac52f113b6a99" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.273600 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73d5a8db64bf11ebef58f441837303b937e0976353efaa0b70ac52f113b6a99"} err="failed to get container status \"a73d5a8db64bf11ebef58f441837303b937e0976353efaa0b70ac52f113b6a99\": rpc error: code = NotFound desc = could not find container \"a73d5a8db64bf11ebef58f441837303b937e0976353efaa0b70ac52f113b6a99\": container with ID starting with a73d5a8db64bf11ebef58f441837303b937e0976353efaa0b70ac52f113b6a99 not found: ID does not exist" Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.293660 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-47c86"] Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.298539 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-47c86"] Dec 12 15:53:21 crc kubenswrapper[4693]: I1212 15:53:21.367598 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f28a792f-4814-4a24-ab79-3a5b00adb25e" path="/var/lib/kubelet/pods/f28a792f-4814-4a24-ab79-3a5b00adb25e/volumes" Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.768422 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v9pf7"] Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.769307 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v9pf7" podUID="e35b458b-b638-4684-8f5b-bcf2d0cf692f" containerName="registry-server" containerID="cri-o://5fb0864b6220c1237c6b82b3ef2a2326262cfc24254f5127241e85a9be74fc05" gracePeriod=30 Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.781432 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fvk2k"] Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.781935 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fvk2k" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" containerName="registry-server" containerID="cri-o://e97bb28cd0d861b09997b0d258a8c2d03c90fd82ff7e726fff922138bf023191" gracePeriod=30 Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.795500 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-npwzs"] Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.795758 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" podUID="7b1c4746-f772-49d8-be11-9abc850ea7e2" containerName="marketplace-operator" 
containerID="cri-o://1d1f1c715f9ccd1a4813d54250ca62c09726b8a742de00ccb60a1693b4826edf" gracePeriod=30 Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.803072 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4lmj"] Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.803429 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q4lmj" podUID="1790176a-e8f5-4490-b020-53392f0475cc" containerName="registry-server" containerID="cri-o://3524de0d28f3353e735b1679611a0661782e5f8de7f62f3c99bd35558958631e" gracePeriod=30 Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.814198 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mglqp"] Dec 12 15:54:18 crc kubenswrapper[4693]: E1212 15:54:18.814468 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f28a792f-4814-4a24-ab79-3a5b00adb25e" containerName="registry" Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.814485 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f28a792f-4814-4a24-ab79-3a5b00adb25e" containerName="registry" Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.814607 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f28a792f-4814-4a24-ab79-3a5b00adb25e" containerName="registry" Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.815061 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.828514 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zmcqt"] Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.828839 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zmcqt" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" containerName="registry-server" containerID="cri-o://5a6d9d11149436685192e96aa236eeaf60aeff9528605ea6dd85a485728d4980" gracePeriod=30 Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.833614 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mglqp"] Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.924446 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fef1de87-a0ab-4a6e-9b37-d446cf2ec47e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mglqp\" (UID: \"fef1de87-a0ab-4a6e-9b37-d446cf2ec47e\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.924497 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fef1de87-a0ab-4a6e-9b37-d446cf2ec47e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mglqp\" (UID: \"fef1de87-a0ab-4a6e-9b37-d446cf2ec47e\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" Dec 12 15:54:18 crc kubenswrapper[4693]: I1212 15:54:18.924524 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kdpb\" (UniqueName: \"kubernetes.io/projected/fef1de87-a0ab-4a6e-9b37-d446cf2ec47e-kube-api-access-4kdpb\") pod 
\"marketplace-operator-79b997595-mglqp\" (UID: \"fef1de87-a0ab-4a6e-9b37-d446cf2ec47e\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.025662 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fef1de87-a0ab-4a6e-9b37-d446cf2ec47e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mglqp\" (UID: \"fef1de87-a0ab-4a6e-9b37-d446cf2ec47e\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.025740 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fef1de87-a0ab-4a6e-9b37-d446cf2ec47e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mglqp\" (UID: \"fef1de87-a0ab-4a6e-9b37-d446cf2ec47e\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.025775 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kdpb\" (UniqueName: \"kubernetes.io/projected/fef1de87-a0ab-4a6e-9b37-d446cf2ec47e-kube-api-access-4kdpb\") pod \"marketplace-operator-79b997595-mglqp\" (UID: \"fef1de87-a0ab-4a6e-9b37-d446cf2ec47e\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.026987 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fef1de87-a0ab-4a6e-9b37-d446cf2ec47e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mglqp\" (UID: \"fef1de87-a0ab-4a6e-9b37-d446cf2ec47e\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.031987 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fef1de87-a0ab-4a6e-9b37-d446cf2ec47e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mglqp\" (UID: \"fef1de87-a0ab-4a6e-9b37-d446cf2ec47e\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.044685 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kdpb\" (UniqueName: \"kubernetes.io/projected/fef1de87-a0ab-4a6e-9b37-d446cf2ec47e-kube-api-access-4kdpb\") pod \"marketplace-operator-79b997595-mglqp\" (UID: \"fef1de87-a0ab-4a6e-9b37-d446cf2ec47e\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.259029 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.263927 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9pf7" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.268798 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.276733 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fvk2k" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.287120 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.295139 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.433484 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cce9d41-da95-4956-bdb8-f234c2f96bac-catalog-content\") pod \"7cce9d41-da95-4956-bdb8-f234c2f96bac\" (UID: \"7cce9d41-da95-4956-bdb8-f234c2f96bac\") " Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.433545 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-trusted-ca\") pod \"7b1c4746-f772-49d8-be11-9abc850ea7e2\" (UID: \"7b1c4746-f772-49d8-be11-9abc850ea7e2\") " Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.433579 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj7lj\" (UniqueName: \"kubernetes.io/projected/7cce9d41-da95-4956-bdb8-f234c2f96bac-kube-api-access-gj7lj\") pod \"7cce9d41-da95-4956-bdb8-f234c2f96bac\" (UID: \"7cce9d41-da95-4956-bdb8-f234c2f96bac\") " Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.433603 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1790176a-e8f5-4490-b020-53392f0475cc-catalog-content\") pod \"1790176a-e8f5-4490-b020-53392f0475cc\" (UID: \"1790176a-e8f5-4490-b020-53392f0475cc\") " Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.433619 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjwsj\" (UniqueName: \"kubernetes.io/projected/7b1c4746-f772-49d8-be11-9abc850ea7e2-kube-api-access-kjwsj\") pod \"7b1c4746-f772-49d8-be11-9abc850ea7e2\" (UID: \"7b1c4746-f772-49d8-be11-9abc850ea7e2\") " Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.433670 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cce9d41-da95-4956-bdb8-f234c2f96bac-utilities\") pod \"7cce9d41-da95-4956-bdb8-f234c2f96bac\" (UID: \"7cce9d41-da95-4956-bdb8-f234c2f96bac\") " Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.433686 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35b458b-b638-4684-8f5b-bcf2d0cf692f-utilities\") pod \"e35b458b-b638-4684-8f5b-bcf2d0cf692f\" (UID: \"e35b458b-b638-4684-8f5b-bcf2d0cf692f\") " Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.433716 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12f193b-21da-485e-a825-03f5bd5070b1-utilities\") pod \"a12f193b-21da-485e-a825-03f5bd5070b1\" (UID: \"a12f193b-21da-485e-a825-03f5bd5070b1\") " Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.433762 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a12f193b-21da-485e-a825-03f5bd5070b1-catalog-content\") pod \"a12f193b-21da-485e-a825-03f5bd5070b1\" (UID: \"a12f193b-21da-485e-a825-03f5bd5070b1\") " Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.433780 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlpgc\" (UniqueName: \"kubernetes.io/projected/e35b458b-b638-4684-8f5b-bcf2d0cf692f-kube-api-access-dlpgc\") pod \"e35b458b-b638-4684-8f5b-bcf2d0cf692f\" (UID: \"e35b458b-b638-4684-8f5b-bcf2d0cf692f\") " Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.433794 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35b458b-b638-4684-8f5b-bcf2d0cf692f-catalog-content\") pod \"e35b458b-b638-4684-8f5b-bcf2d0cf692f\" (UID: \"e35b458b-b638-4684-8f5b-bcf2d0cf692f\") " Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.433830 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-operator-metrics\") pod \"7b1c4746-f772-49d8-be11-9abc850ea7e2\" (UID: \"7b1c4746-f772-49d8-be11-9abc850ea7e2\") " Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.433868 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb757\" (UniqueName: \"kubernetes.io/projected/a12f193b-21da-485e-a825-03f5bd5070b1-kube-api-access-bb757\") pod \"a12f193b-21da-485e-a825-03f5bd5070b1\" (UID: \"a12f193b-21da-485e-a825-03f5bd5070b1\") " Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.433904 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2z4s\" (UniqueName: \"kubernetes.io/projected/1790176a-e8f5-4490-b020-53392f0475cc-kube-api-access-s2z4s\") pod \"1790176a-e8f5-4490-b020-53392f0475cc\" (UID: \"1790176a-e8f5-4490-b020-53392f0475cc\") " Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.433924 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1790176a-e8f5-4490-b020-53392f0475cc-utilities\") pod \"1790176a-e8f5-4490-b020-53392f0475cc\" (UID: \"1790176a-e8f5-4490-b020-53392f0475cc\") " Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.435170 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1790176a-e8f5-4490-b020-53392f0475cc-utilities" (OuterVolumeSpecName: "utilities") pod "1790176a-e8f5-4490-b020-53392f0475cc" (UID: "1790176a-e8f5-4490-b020-53392f0475cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.436763 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e35b458b-b638-4684-8f5b-bcf2d0cf692f-utilities" (OuterVolumeSpecName: "utilities") pod "e35b458b-b638-4684-8f5b-bcf2d0cf692f" (UID: "e35b458b-b638-4684-8f5b-bcf2d0cf692f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.441094 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a12f193b-21da-485e-a825-03f5bd5070b1-utilities" (OuterVolumeSpecName: "utilities") pod "a12f193b-21da-485e-a825-03f5bd5070b1" (UID: "a12f193b-21da-485e-a825-03f5bd5070b1"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.441638 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cce9d41-da95-4956-bdb8-f234c2f96bac-utilities" (OuterVolumeSpecName: "utilities") pod "7cce9d41-da95-4956-bdb8-f234c2f96bac" (UID: "7cce9d41-da95-4956-bdb8-f234c2f96bac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.443844 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7b1c4746-f772-49d8-be11-9abc850ea7e2" (UID: "7b1c4746-f772-49d8-be11-9abc850ea7e2"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.444457 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1790176a-e8f5-4490-b020-53392f0475cc-kube-api-access-s2z4s" (OuterVolumeSpecName: "kube-api-access-s2z4s") pod "1790176a-e8f5-4490-b020-53392f0475cc" (UID: "1790176a-e8f5-4490-b020-53392f0475cc"). InnerVolumeSpecName "kube-api-access-s2z4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.444477 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cce9d41-da95-4956-bdb8-f234c2f96bac-kube-api-access-gj7lj" (OuterVolumeSpecName: "kube-api-access-gj7lj") pod "7cce9d41-da95-4956-bdb8-f234c2f96bac" (UID: "7cce9d41-da95-4956-bdb8-f234c2f96bac"). InnerVolumeSpecName "kube-api-access-gj7lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.444494 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a12f193b-21da-485e-a825-03f5bd5070b1-kube-api-access-bb757" (OuterVolumeSpecName: "kube-api-access-bb757") pod "a12f193b-21da-485e-a825-03f5bd5070b1" (UID: "a12f193b-21da-485e-a825-03f5bd5070b1"). InnerVolumeSpecName "kube-api-access-bb757". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.444586 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e35b458b-b638-4684-8f5b-bcf2d0cf692f-kube-api-access-dlpgc" (OuterVolumeSpecName: "kube-api-access-dlpgc") pod "e35b458b-b638-4684-8f5b-bcf2d0cf692f" (UID: "e35b458b-b638-4684-8f5b-bcf2d0cf692f"). InnerVolumeSpecName "kube-api-access-dlpgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.445746 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7b1c4746-f772-49d8-be11-9abc850ea7e2" (UID: "7b1c4746-f772-49d8-be11-9abc850ea7e2"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.447795 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1c4746-f772-49d8-be11-9abc850ea7e2-kube-api-access-kjwsj" (OuterVolumeSpecName: "kube-api-access-kjwsj") pod "7b1c4746-f772-49d8-be11-9abc850ea7e2" (UID: "7b1c4746-f772-49d8-be11-9abc850ea7e2"). InnerVolumeSpecName "kube-api-access-kjwsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.486950 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1790176a-e8f5-4490-b020-53392f0475cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1790176a-e8f5-4490-b020-53392f0475cc" (UID: "1790176a-e8f5-4490-b020-53392f0475cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.494515 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e35b458b-b638-4684-8f5b-bcf2d0cf692f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e35b458b-b638-4684-8f5b-bcf2d0cf692f" (UID: "e35b458b-b638-4684-8f5b-bcf2d0cf692f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.509053 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a12f193b-21da-485e-a825-03f5bd5070b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a12f193b-21da-485e-a825-03f5bd5070b1" (UID: "a12f193b-21da-485e-a825-03f5bd5070b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.535108 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb757\" (UniqueName: \"kubernetes.io/projected/a12f193b-21da-485e-a825-03f5bd5070b1-kube-api-access-bb757\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.535147 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2z4s\" (UniqueName: \"kubernetes.io/projected/1790176a-e8f5-4490-b020-53392f0475cc-kube-api-access-s2z4s\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.535162 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1790176a-e8f5-4490-b020-53392f0475cc-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.535178 4693 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.535190 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj7lj\" (UniqueName: \"kubernetes.io/projected/7cce9d41-da95-4956-bdb8-f234c2f96bac-kube-api-access-gj7lj\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.535200 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1790176a-e8f5-4490-b020-53392f0475cc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.535209 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjwsj\" (UniqueName: \"kubernetes.io/projected/7b1c4746-f772-49d8-be11-9abc850ea7e2-kube-api-access-kjwsj\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.535221 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cce9d41-da95-4956-bdb8-f234c2f96bac-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.535231 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35b458b-b638-4684-8f5b-bcf2d0cf692f-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.535241 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12f193b-21da-485e-a825-03f5bd5070b1-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.535251 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12f193b-21da-485e-a825-03f5bd5070b1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.535262 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35b458b-b638-4684-8f5b-bcf2d0cf692f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.535294 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlpgc\" (UniqueName: 
\"kubernetes.io/projected/e35b458b-b638-4684-8f5b-bcf2d0cf692f-kube-api-access-dlpgc\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.535308 4693 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b1c4746-f772-49d8-be11-9abc850ea7e2-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.551265 4693 generic.go:334] "Generic (PLEG): container finished" podID="a12f193b-21da-485e-a825-03f5bd5070b1" containerID="e97bb28cd0d861b09997b0d258a8c2d03c90fd82ff7e726fff922138bf023191" exitCode=0 Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.551321 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvk2k" event={"ID":"a12f193b-21da-485e-a825-03f5bd5070b1","Type":"ContainerDied","Data":"e97bb28cd0d861b09997b0d258a8c2d03c90fd82ff7e726fff922138bf023191"} Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.551350 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvk2k" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.551380 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvk2k" event={"ID":"a12f193b-21da-485e-a825-03f5bd5070b1","Type":"ContainerDied","Data":"3a3bc0a85090b3a022ddbdc07d3081f20d7ec4f31dd23c9d61e9c963bb7f71cc"} Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.551425 4693 scope.go:117] "RemoveContainer" containerID="e97bb28cd0d861b09997b0d258a8c2d03c90fd82ff7e726fff922138bf023191" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.554788 4693 generic.go:334] "Generic (PLEG): container finished" podID="e35b458b-b638-4684-8f5b-bcf2d0cf692f" containerID="5fb0864b6220c1237c6b82b3ef2a2326262cfc24254f5127241e85a9be74fc05" exitCode=0 Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.554940 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9pf7" event={"ID":"e35b458b-b638-4684-8f5b-bcf2d0cf692f","Type":"ContainerDied","Data":"5fb0864b6220c1237c6b82b3ef2a2326262cfc24254f5127241e85a9be74fc05"} Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.555035 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9pf7" event={"ID":"e35b458b-b638-4684-8f5b-bcf2d0cf692f","Type":"ContainerDied","Data":"631d8f9923934a95c73f9e5e84c6d62ee80be3f0b6e55d66134846c8e509136c"} Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.555057 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v9pf7" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.558010 4693 generic.go:334] "Generic (PLEG): container finished" podID="1790176a-e8f5-4490-b020-53392f0475cc" containerID="3524de0d28f3353e735b1679611a0661782e5f8de7f62f3c99bd35558958631e" exitCode=0 Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.558059 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4lmj" event={"ID":"1790176a-e8f5-4490-b020-53392f0475cc","Type":"ContainerDied","Data":"3524de0d28f3353e735b1679611a0661782e5f8de7f62f3c99bd35558958631e"} Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.558075 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4lmj" event={"ID":"1790176a-e8f5-4490-b020-53392f0475cc","Type":"ContainerDied","Data":"54805fb427b9dae2da0b62441319bcb59f98394fe60f2e2577605981e16645a7"} Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.558191 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4lmj" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.567512 4693 generic.go:334] "Generic (PLEG): container finished" podID="7b1c4746-f772-49d8-be11-9abc850ea7e2" containerID="1d1f1c715f9ccd1a4813d54250ca62c09726b8a742de00ccb60a1693b4826edf" exitCode=0 Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.567581 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" event={"ID":"7b1c4746-f772-49d8-be11-9abc850ea7e2","Type":"ContainerDied","Data":"1d1f1c715f9ccd1a4813d54250ca62c09726b8a742de00ccb60a1693b4826edf"} Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.567606 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" event={"ID":"7b1c4746-f772-49d8-be11-9abc850ea7e2","Type":"ContainerDied","Data":"7ca443e5dbb51dff5a7fd4d4704e9349bc48ab8c2073afe678d2afe7fb0a3d00"} Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.567684 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-npwzs" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.567875 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cce9d41-da95-4956-bdb8-f234c2f96bac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7cce9d41-da95-4956-bdb8-f234c2f96bac" (UID: "7cce9d41-da95-4956-bdb8-f234c2f96bac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.569565 4693 generic.go:334] "Generic (PLEG): container finished" podID="7cce9d41-da95-4956-bdb8-f234c2f96bac" containerID="5a6d9d11149436685192e96aa236eeaf60aeff9528605ea6dd85a485728d4980" exitCode=0 Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.569594 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmcqt" event={"ID":"7cce9d41-da95-4956-bdb8-f234c2f96bac","Type":"ContainerDied","Data":"5a6d9d11149436685192e96aa236eeaf60aeff9528605ea6dd85a485728d4980"} Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.569612 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmcqt" event={"ID":"7cce9d41-da95-4956-bdb8-f234c2f96bac","Type":"ContainerDied","Data":"fa2d3492e64faead34dfae559ed43d14440227b69eff978eda7e49cc1aafd484"} Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.569679 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zmcqt" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.572687 4693 scope.go:117] "RemoveContainer" containerID="5e0f954137266b19ffa37e81a62b858692fbf293323495ae8ea1460a06ab73bd" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.589611 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fvk2k"] Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.592600 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fvk2k"] Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.605354 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4lmj"] Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.608360 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4lmj"] Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.616208 4693 scope.go:117] "RemoveContainer" containerID="7b7be798167610ad2561ec6eff87c2ed663fef45f377ee38ab65a8cbf05ffbd8" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.623510 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v9pf7"] Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.630192 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v9pf7"] Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.633210 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zmcqt"] Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.636195 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cce9d41-da95-4956-bdb8-f234c2f96bac-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.639018 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zmcqt"] Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.643339 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-npwzs"] Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.647499 4693 scope.go:117] "RemoveContainer" containerID="e97bb28cd0d861b09997b0d258a8c2d03c90fd82ff7e726fff922138bf023191" Dec 12 15:54:19 crc kubenswrapper[4693]: 
I1212 15:54:19.647541 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-npwzs"] Dec 12 15:54:19 crc kubenswrapper[4693]: E1212 15:54:19.647913 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e97bb28cd0d861b09997b0d258a8c2d03c90fd82ff7e726fff922138bf023191\": container with ID starting with e97bb28cd0d861b09997b0d258a8c2d03c90fd82ff7e726fff922138bf023191 not found: ID does not exist" containerID="e97bb28cd0d861b09997b0d258a8c2d03c90fd82ff7e726fff922138bf023191" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.647950 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e97bb28cd0d861b09997b0d258a8c2d03c90fd82ff7e726fff922138bf023191"} err="failed to get container status \"e97bb28cd0d861b09997b0d258a8c2d03c90fd82ff7e726fff922138bf023191\": rpc error: code = NotFound desc = could not find container \"e97bb28cd0d861b09997b0d258a8c2d03c90fd82ff7e726fff922138bf023191\": container with ID starting with e97bb28cd0d861b09997b0d258a8c2d03c90fd82ff7e726fff922138bf023191 not found: ID does not exist" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.647979 4693 scope.go:117] "RemoveContainer" containerID="5e0f954137266b19ffa37e81a62b858692fbf293323495ae8ea1460a06ab73bd" Dec 12 15:54:19 crc kubenswrapper[4693]: E1212 15:54:19.648292 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e0f954137266b19ffa37e81a62b858692fbf293323495ae8ea1460a06ab73bd\": container with ID starting with 5e0f954137266b19ffa37e81a62b858692fbf293323495ae8ea1460a06ab73bd not found: ID does not exist" containerID="5e0f954137266b19ffa37e81a62b858692fbf293323495ae8ea1460a06ab73bd" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.648342 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e0f954137266b19ffa37e81a62b858692fbf293323495ae8ea1460a06ab73bd"} err="failed to get container status \"5e0f954137266b19ffa37e81a62b858692fbf293323495ae8ea1460a06ab73bd\": rpc error: code = NotFound desc = could not find container \"5e0f954137266b19ffa37e81a62b858692fbf293323495ae8ea1460a06ab73bd\": container with ID starting with 5e0f954137266b19ffa37e81a62b858692fbf293323495ae8ea1460a06ab73bd not found: ID does not exist" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.648372 4693 scope.go:117] "RemoveContainer" containerID="7b7be798167610ad2561ec6eff87c2ed663fef45f377ee38ab65a8cbf05ffbd8" Dec 12 15:54:19 crc kubenswrapper[4693]: E1212 15:54:19.648713 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b7be798167610ad2561ec6eff87c2ed663fef45f377ee38ab65a8cbf05ffbd8\": container with ID starting with 7b7be798167610ad2561ec6eff87c2ed663fef45f377ee38ab65a8cbf05ffbd8 not found: ID does not exist" containerID="7b7be798167610ad2561ec6eff87c2ed663fef45f377ee38ab65a8cbf05ffbd8" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.648749 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b7be798167610ad2561ec6eff87c2ed663fef45f377ee38ab65a8cbf05ffbd8"} err="failed to get container status \"7b7be798167610ad2561ec6eff87c2ed663fef45f377ee38ab65a8cbf05ffbd8\": rpc error: code = NotFound desc = could not find container \"7b7be798167610ad2561ec6eff87c2ed663fef45f377ee38ab65a8cbf05ffbd8\": 
container with ID starting with 7b7be798167610ad2561ec6eff87c2ed663fef45f377ee38ab65a8cbf05ffbd8 not found: ID does not exist" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.648774 4693 scope.go:117] "RemoveContainer" containerID="5fb0864b6220c1237c6b82b3ef2a2326262cfc24254f5127241e85a9be74fc05" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.661738 4693 scope.go:117] "RemoveContainer" containerID="98fa9939f95bbc3e9eb316d7f5c3cb9254b0c8caf448502ce9b0f7099e4151de" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.673956 4693 scope.go:117] "RemoveContainer" containerID="822f2009737fc33742b968d24495693962e99d5f16282120857d0389f38c2eb9" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.678930 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mglqp"] Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.689567 4693 scope.go:117] "RemoveContainer" containerID="5fb0864b6220c1237c6b82b3ef2a2326262cfc24254f5127241e85a9be74fc05" Dec 12 15:54:19 crc kubenswrapper[4693]: E1212 15:54:19.689944 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fb0864b6220c1237c6b82b3ef2a2326262cfc24254f5127241e85a9be74fc05\": container with ID starting with 5fb0864b6220c1237c6b82b3ef2a2326262cfc24254f5127241e85a9be74fc05 not found: ID does not exist" containerID="5fb0864b6220c1237c6b82b3ef2a2326262cfc24254f5127241e85a9be74fc05" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.689981 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fb0864b6220c1237c6b82b3ef2a2326262cfc24254f5127241e85a9be74fc05"} err="failed to get container status \"5fb0864b6220c1237c6b82b3ef2a2326262cfc24254f5127241e85a9be74fc05\": rpc error: code = NotFound desc = could not find container \"5fb0864b6220c1237c6b82b3ef2a2326262cfc24254f5127241e85a9be74fc05\": container with ID starting with 5fb0864b6220c1237c6b82b3ef2a2326262cfc24254f5127241e85a9be74fc05 not found: ID does not exist" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.690005 4693 scope.go:117] "RemoveContainer" containerID="98fa9939f95bbc3e9eb316d7f5c3cb9254b0c8caf448502ce9b0f7099e4151de" Dec 12 15:54:19 crc kubenswrapper[4693]: E1212 15:54:19.690419 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98fa9939f95bbc3e9eb316d7f5c3cb9254b0c8caf448502ce9b0f7099e4151de\": container with ID starting with 98fa9939f95bbc3e9eb316d7f5c3cb9254b0c8caf448502ce9b0f7099e4151de not found: ID does not exist" containerID="98fa9939f95bbc3e9eb316d7f5c3cb9254b0c8caf448502ce9b0f7099e4151de" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.690457 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98fa9939f95bbc3e9eb316d7f5c3cb9254b0c8caf448502ce9b0f7099e4151de"} err="failed to get container status \"98fa9939f95bbc3e9eb316d7f5c3cb9254b0c8caf448502ce9b0f7099e4151de\": rpc error: code = NotFound desc = could not find container \"98fa9939f95bbc3e9eb316d7f5c3cb9254b0c8caf448502ce9b0f7099e4151de\": container with ID starting with 98fa9939f95bbc3e9eb316d7f5c3cb9254b0c8caf448502ce9b0f7099e4151de not found: ID does not exist" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.690484 4693 scope.go:117] "RemoveContainer" containerID="822f2009737fc33742b968d24495693962e99d5f16282120857d0389f38c2eb9" Dec 12 15:54:19 crc kubenswrapper[4693]: E1212 
15:54:19.691666 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"822f2009737fc33742b968d24495693962e99d5f16282120857d0389f38c2eb9\": container with ID starting with 822f2009737fc33742b968d24495693962e99d5f16282120857d0389f38c2eb9 not found: ID does not exist" containerID="822f2009737fc33742b968d24495693962e99d5f16282120857d0389f38c2eb9" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.691704 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"822f2009737fc33742b968d24495693962e99d5f16282120857d0389f38c2eb9"} err="failed to get container status \"822f2009737fc33742b968d24495693962e99d5f16282120857d0389f38c2eb9\": rpc error: code = NotFound desc = could not find container \"822f2009737fc33742b968d24495693962e99d5f16282120857d0389f38c2eb9\": container with ID starting with 822f2009737fc33742b968d24495693962e99d5f16282120857d0389f38c2eb9 not found: ID does not exist" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.691732 4693 scope.go:117] "RemoveContainer" containerID="3524de0d28f3353e735b1679611a0661782e5f8de7f62f3c99bd35558958631e" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.705942 4693 scope.go:117] "RemoveContainer" containerID="c6beaeaff16c4e3569d403b9f74e046ca48fa1f56b83a0ee849688e2f5ef8513" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.721621 4693 scope.go:117] "RemoveContainer" containerID="ba8311edd6132fca55adeead29771e6074d98ee54eabcd61aa907f0e97850c9c" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.740695 4693 scope.go:117] "RemoveContainer" containerID="3524de0d28f3353e735b1679611a0661782e5f8de7f62f3c99bd35558958631e" Dec 12 15:54:19 crc kubenswrapper[4693]: E1212 15:54:19.741355 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3524de0d28f3353e735b1679611a0661782e5f8de7f62f3c99bd35558958631e\": container with ID starting with 3524de0d28f3353e735b1679611a0661782e5f8de7f62f3c99bd35558958631e not found: ID does not exist" containerID="3524de0d28f3353e735b1679611a0661782e5f8de7f62f3c99bd35558958631e" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.741387 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3524de0d28f3353e735b1679611a0661782e5f8de7f62f3c99bd35558958631e"} err="failed to get container status \"3524de0d28f3353e735b1679611a0661782e5f8de7f62f3c99bd35558958631e\": rpc error: code = NotFound desc = could not find container \"3524de0d28f3353e735b1679611a0661782e5f8de7f62f3c99bd35558958631e\": container with ID starting with 3524de0d28f3353e735b1679611a0661782e5f8de7f62f3c99bd35558958631e not found: ID does not exist" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.741409 4693 scope.go:117] "RemoveContainer" containerID="c6beaeaff16c4e3569d403b9f74e046ca48fa1f56b83a0ee849688e2f5ef8513" Dec 12 15:54:19 crc kubenswrapper[4693]: E1212 15:54:19.742913 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6beaeaff16c4e3569d403b9f74e046ca48fa1f56b83a0ee849688e2f5ef8513\": container with ID starting with c6beaeaff16c4e3569d403b9f74e046ca48fa1f56b83a0ee849688e2f5ef8513 not found: ID does not exist" containerID="c6beaeaff16c4e3569d403b9f74e046ca48fa1f56b83a0ee849688e2f5ef8513" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.742938 4693 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"c6beaeaff16c4e3569d403b9f74e046ca48fa1f56b83a0ee849688e2f5ef8513"} err="failed to get container status \"c6beaeaff16c4e3569d403b9f74e046ca48fa1f56b83a0ee849688e2f5ef8513\": rpc error: code = NotFound desc = could not find container \"c6beaeaff16c4e3569d403b9f74e046ca48fa1f56b83a0ee849688e2f5ef8513\": container with ID starting with c6beaeaff16c4e3569d403b9f74e046ca48fa1f56b83a0ee849688e2f5ef8513 not found: ID does not exist" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.742952 4693 scope.go:117] "RemoveContainer" containerID="ba8311edd6132fca55adeead29771e6074d98ee54eabcd61aa907f0e97850c9c" Dec 12 15:54:19 crc kubenswrapper[4693]: E1212 15:54:19.743155 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba8311edd6132fca55adeead29771e6074d98ee54eabcd61aa907f0e97850c9c\": container with ID starting with ba8311edd6132fca55adeead29771e6074d98ee54eabcd61aa907f0e97850c9c not found: ID does not exist" containerID="ba8311edd6132fca55adeead29771e6074d98ee54eabcd61aa907f0e97850c9c" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.743178 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba8311edd6132fca55adeead29771e6074d98ee54eabcd61aa907f0e97850c9c"} err="failed to get container status \"ba8311edd6132fca55adeead29771e6074d98ee54eabcd61aa907f0e97850c9c\": rpc error: code = NotFound desc = could not find container \"ba8311edd6132fca55adeead29771e6074d98ee54eabcd61aa907f0e97850c9c\": container with ID starting with ba8311edd6132fca55adeead29771e6074d98ee54eabcd61aa907f0e97850c9c not found: ID does not exist" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.743191 4693 scope.go:117] "RemoveContainer" containerID="1d1f1c715f9ccd1a4813d54250ca62c09726b8a742de00ccb60a1693b4826edf" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.772648 4693 scope.go:117] "RemoveContainer" containerID="1d1f1c715f9ccd1a4813d54250ca62c09726b8a742de00ccb60a1693b4826edf" Dec 12 15:54:19 crc kubenswrapper[4693]: E1212 15:54:19.773186 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d1f1c715f9ccd1a4813d54250ca62c09726b8a742de00ccb60a1693b4826edf\": container with ID starting with 1d1f1c715f9ccd1a4813d54250ca62c09726b8a742de00ccb60a1693b4826edf not found: ID does not exist" containerID="1d1f1c715f9ccd1a4813d54250ca62c09726b8a742de00ccb60a1693b4826edf" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.773539 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d1f1c715f9ccd1a4813d54250ca62c09726b8a742de00ccb60a1693b4826edf"} err="failed to get container status \"1d1f1c715f9ccd1a4813d54250ca62c09726b8a742de00ccb60a1693b4826edf\": rpc error: code = NotFound desc = could not find container \"1d1f1c715f9ccd1a4813d54250ca62c09726b8a742de00ccb60a1693b4826edf\": container with ID starting with 1d1f1c715f9ccd1a4813d54250ca62c09726b8a742de00ccb60a1693b4826edf not found: ID does not exist" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.773568 4693 scope.go:117] "RemoveContainer" containerID="5a6d9d11149436685192e96aa236eeaf60aeff9528605ea6dd85a485728d4980" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.790353 4693 scope.go:117] "RemoveContainer" containerID="4bffb2ee83587f4d98be94bf88c65760f8edcfd458ff94dbb3ceca3f528cfdee" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.819669 4693 
scope.go:117] "RemoveContainer" containerID="bfcc60937ad359244f2cc7f93df6cab1931be45ff4ff2e4470aae77646bc5de9" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.837469 4693 scope.go:117] "RemoveContainer" containerID="5a6d9d11149436685192e96aa236eeaf60aeff9528605ea6dd85a485728d4980" Dec 12 15:54:19 crc kubenswrapper[4693]: E1212 15:54:19.838229 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a6d9d11149436685192e96aa236eeaf60aeff9528605ea6dd85a485728d4980\": container with ID starting with 5a6d9d11149436685192e96aa236eeaf60aeff9528605ea6dd85a485728d4980 not found: ID does not exist" containerID="5a6d9d11149436685192e96aa236eeaf60aeff9528605ea6dd85a485728d4980" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.838259 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a6d9d11149436685192e96aa236eeaf60aeff9528605ea6dd85a485728d4980"} err="failed to get container status \"5a6d9d11149436685192e96aa236eeaf60aeff9528605ea6dd85a485728d4980\": rpc error: code = NotFound desc = could not find container \"5a6d9d11149436685192e96aa236eeaf60aeff9528605ea6dd85a485728d4980\": container with ID starting with 5a6d9d11149436685192e96aa236eeaf60aeff9528605ea6dd85a485728d4980 not found: ID does not exist" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.838322 4693 scope.go:117] "RemoveContainer" containerID="4bffb2ee83587f4d98be94bf88c65760f8edcfd458ff94dbb3ceca3f528cfdee" Dec 12 15:54:19 crc kubenswrapper[4693]: E1212 15:54:19.838642 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bffb2ee83587f4d98be94bf88c65760f8edcfd458ff94dbb3ceca3f528cfdee\": container with ID starting with 4bffb2ee83587f4d98be94bf88c65760f8edcfd458ff94dbb3ceca3f528cfdee not found: ID does not exist" containerID="4bffb2ee83587f4d98be94bf88c65760f8edcfd458ff94dbb3ceca3f528cfdee" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.838672 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bffb2ee83587f4d98be94bf88c65760f8edcfd458ff94dbb3ceca3f528cfdee"} err="failed to get container status \"4bffb2ee83587f4d98be94bf88c65760f8edcfd458ff94dbb3ceca3f528cfdee\": rpc error: code = NotFound desc = could not find container \"4bffb2ee83587f4d98be94bf88c65760f8edcfd458ff94dbb3ceca3f528cfdee\": container with ID starting with 4bffb2ee83587f4d98be94bf88c65760f8edcfd458ff94dbb3ceca3f528cfdee not found: ID does not exist" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.838713 4693 scope.go:117] "RemoveContainer" containerID="bfcc60937ad359244f2cc7f93df6cab1931be45ff4ff2e4470aae77646bc5de9" Dec 12 15:54:19 crc kubenswrapper[4693]: E1212 15:54:19.839089 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfcc60937ad359244f2cc7f93df6cab1931be45ff4ff2e4470aae77646bc5de9\": container with ID starting with bfcc60937ad359244f2cc7f93df6cab1931be45ff4ff2e4470aae77646bc5de9 not found: ID does not exist" containerID="bfcc60937ad359244f2cc7f93df6cab1931be45ff4ff2e4470aae77646bc5de9" Dec 12 15:54:19 crc kubenswrapper[4693]: I1212 15:54:19.839129 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfcc60937ad359244f2cc7f93df6cab1931be45ff4ff2e4470aae77646bc5de9"} err="failed to get container status 
\"bfcc60937ad359244f2cc7f93df6cab1931be45ff4ff2e4470aae77646bc5de9\": rpc error: code = NotFound desc = could not find container \"bfcc60937ad359244f2cc7f93df6cab1931be45ff4ff2e4470aae77646bc5de9\": container with ID starting with bfcc60937ad359244f2cc7f93df6cab1931be45ff4ff2e4470aae77646bc5de9 not found: ID does not exist" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.575817 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" event={"ID":"fef1de87-a0ab-4a6e-9b37-d446cf2ec47e","Type":"ContainerStarted","Data":"b04201ccdefdb484c3f124520c2b6adecbf46b09902c2bf20b48462e1aed24c8"} Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.575854 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" event={"ID":"fef1de87-a0ab-4a6e-9b37-d446cf2ec47e","Type":"ContainerStarted","Data":"1875479ae5f99b760a3a93a34f0d8e1f5e2a15cee26a05be13d9ae1b93855ee1"} Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.576188 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.580494 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.593156 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" podStartSLOduration=2.59313245 podStartE2EDuration="2.59313245s" podCreationTimestamp="2025-12-12 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:54:20.591378551 +0000 UTC m=+487.760018172" watchObservedRunningTime="2025-12-12 15:54:20.59313245 +0000 UTC m=+487.761772051" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.780713 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q5jh6"] Dec 12 15:54:20 crc kubenswrapper[4693]: E1212 15:54:20.780921 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e35b458b-b638-4684-8f5b-bcf2d0cf692f" containerName="extract-content" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.780933 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35b458b-b638-4684-8f5b-bcf2d0cf692f" containerName="extract-content" Dec 12 15:54:20 crc kubenswrapper[4693]: E1212 15:54:20.780942 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1790176a-e8f5-4490-b020-53392f0475cc" containerName="registry-server" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.780948 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="1790176a-e8f5-4490-b020-53392f0475cc" containerName="registry-server" Dec 12 15:54:20 crc kubenswrapper[4693]: E1212 15:54:20.780958 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1c4746-f772-49d8-be11-9abc850ea7e2" containerName="marketplace-operator" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.780964 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1c4746-f772-49d8-be11-9abc850ea7e2" containerName="marketplace-operator" Dec 12 15:54:20 crc kubenswrapper[4693]: E1212 15:54:20.780974 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" 
containerName="registry-server" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.780980 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" containerName="registry-server" Dec 12 15:54:20 crc kubenswrapper[4693]: E1212 15:54:20.780986 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e35b458b-b638-4684-8f5b-bcf2d0cf692f" containerName="registry-server" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.780991 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35b458b-b638-4684-8f5b-bcf2d0cf692f" containerName="registry-server" Dec 12 15:54:20 crc kubenswrapper[4693]: E1212 15:54:20.781000 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1790176a-e8f5-4490-b020-53392f0475cc" containerName="extract-utilities" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.781006 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="1790176a-e8f5-4490-b020-53392f0475cc" containerName="extract-utilities" Dec 12 15:54:20 crc kubenswrapper[4693]: E1212 15:54:20.781013 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1790176a-e8f5-4490-b020-53392f0475cc" containerName="extract-content" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.781018 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="1790176a-e8f5-4490-b020-53392f0475cc" containerName="extract-content" Dec 12 15:54:20 crc kubenswrapper[4693]: E1212 15:54:20.781028 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e35b458b-b638-4684-8f5b-bcf2d0cf692f" containerName="extract-utilities" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.781034 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35b458b-b638-4684-8f5b-bcf2d0cf692f" containerName="extract-utilities" Dec 12 15:54:20 crc kubenswrapper[4693]: E1212 15:54:20.781040 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" containerName="extract-content" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.781046 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" containerName="extract-content" Dec 12 15:54:20 crc kubenswrapper[4693]: E1212 15:54:20.781053 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" containerName="extract-utilities" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.781059 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" containerName="extract-utilities" Dec 12 15:54:20 crc kubenswrapper[4693]: E1212 15:54:20.781068 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" containerName="registry-server" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.781074 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" containerName="registry-server" Dec 12 15:54:20 crc kubenswrapper[4693]: E1212 15:54:20.781080 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" containerName="extract-content" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.781087 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" containerName="extract-content" Dec 12 15:54:20 crc kubenswrapper[4693]: E1212 15:54:20.781094 4693 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a12f193b-21da-485e-a825-03f5bd5070b1" containerName="extract-utilities" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.781099 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" containerName="extract-utilities" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.781180 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" containerName="registry-server" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.781190 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="e35b458b-b638-4684-8f5b-bcf2d0cf692f" containerName="registry-server" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.781197 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" containerName="registry-server" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.781206 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b1c4746-f772-49d8-be11-9abc850ea7e2" containerName="marketplace-operator" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.781213 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="1790176a-e8f5-4490-b020-53392f0475cc" containerName="registry-server" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.781988 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.786014 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.788289 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5jh6"] Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.951576 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dabbc2a-64f3-4f25-8b65-17ed75c51801-catalog-content\") pod \"certified-operators-q5jh6\" (UID: \"2dabbc2a-64f3-4f25-8b65-17ed75c51801\") " pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.951637 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dabbc2a-64f3-4f25-8b65-17ed75c51801-utilities\") pod \"certified-operators-q5jh6\" (UID: \"2dabbc2a-64f3-4f25-8b65-17ed75c51801\") " pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 15:54:20 crc kubenswrapper[4693]: I1212 15:54:20.951668 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4977v\" (UniqueName: \"kubernetes.io/projected/2dabbc2a-64f3-4f25-8b65-17ed75c51801-kube-api-access-4977v\") pod \"certified-operators-q5jh6\" (UID: \"2dabbc2a-64f3-4f25-8b65-17ed75c51801\") " pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.052784 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dabbc2a-64f3-4f25-8b65-17ed75c51801-catalog-content\") pod \"certified-operators-q5jh6\" (UID: \"2dabbc2a-64f3-4f25-8b65-17ed75c51801\") " pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 
15:54:21.052855 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dabbc2a-64f3-4f25-8b65-17ed75c51801-utilities\") pod \"certified-operators-q5jh6\" (UID: \"2dabbc2a-64f3-4f25-8b65-17ed75c51801\") " pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.052902 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4977v\" (UniqueName: \"kubernetes.io/projected/2dabbc2a-64f3-4f25-8b65-17ed75c51801-kube-api-access-4977v\") pod \"certified-operators-q5jh6\" (UID: \"2dabbc2a-64f3-4f25-8b65-17ed75c51801\") " pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.053345 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dabbc2a-64f3-4f25-8b65-17ed75c51801-catalog-content\") pod \"certified-operators-q5jh6\" (UID: \"2dabbc2a-64f3-4f25-8b65-17ed75c51801\") " pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.055076 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dabbc2a-64f3-4f25-8b65-17ed75c51801-utilities\") pod \"certified-operators-q5jh6\" (UID: \"2dabbc2a-64f3-4f25-8b65-17ed75c51801\") " pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.070951 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4977v\" (UniqueName: \"kubernetes.io/projected/2dabbc2a-64f3-4f25-8b65-17ed75c51801-kube-api-access-4977v\") pod \"certified-operators-q5jh6\" (UID: \"2dabbc2a-64f3-4f25-8b65-17ed75c51801\") " pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.104622 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.334800 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5jh6"] Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.386320 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1790176a-e8f5-4490-b020-53392f0475cc" path="/var/lib/kubelet/pods/1790176a-e8f5-4490-b020-53392f0475cc/volumes" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.387955 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b1c4746-f772-49d8-be11-9abc850ea7e2" path="/var/lib/kubelet/pods/7b1c4746-f772-49d8-be11-9abc850ea7e2/volumes" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.388485 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cce9d41-da95-4956-bdb8-f234c2f96bac" path="/var/lib/kubelet/pods/7cce9d41-da95-4956-bdb8-f234c2f96bac/volumes" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.389113 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a12f193b-21da-485e-a825-03f5bd5070b1" path="/var/lib/kubelet/pods/a12f193b-21da-485e-a825-03f5bd5070b1/volumes" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.390140 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e35b458b-b638-4684-8f5b-bcf2d0cf692f" path="/var/lib/kubelet/pods/e35b458b-b638-4684-8f5b-bcf2d0cf692f/volumes" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.390683 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sbjt9"] Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.392075 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.396616 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbjt9"] Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.397630 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.559566 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fv8g\" (UniqueName: \"kubernetes.io/projected/cabce219-6d9f-4aac-9402-ecf80e930f68-kube-api-access-7fv8g\") pod \"redhat-marketplace-sbjt9\" (UID: \"cabce219-6d9f-4aac-9402-ecf80e930f68\") " pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.560311 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cabce219-6d9f-4aac-9402-ecf80e930f68-utilities\") pod \"redhat-marketplace-sbjt9\" (UID: \"cabce219-6d9f-4aac-9402-ecf80e930f68\") " pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.560363 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cabce219-6d9f-4aac-9402-ecf80e930f68-catalog-content\") pod \"redhat-marketplace-sbjt9\" (UID: \"cabce219-6d9f-4aac-9402-ecf80e930f68\") " pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.589794 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh6" event={"ID":"2dabbc2a-64f3-4f25-8b65-17ed75c51801","Type":"ContainerStarted","Data":"92223bcb7966a58aea526fafd3ed0de244090b1400801927646930c555c7d33a"} Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.662238 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fv8g\" (UniqueName: \"kubernetes.io/projected/cabce219-6d9f-4aac-9402-ecf80e930f68-kube-api-access-7fv8g\") pod \"redhat-marketplace-sbjt9\" (UID: \"cabce219-6d9f-4aac-9402-ecf80e930f68\") " pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.662368 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cabce219-6d9f-4aac-9402-ecf80e930f68-utilities\") pod \"redhat-marketplace-sbjt9\" (UID: \"cabce219-6d9f-4aac-9402-ecf80e930f68\") " pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.662418 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cabce219-6d9f-4aac-9402-ecf80e930f68-catalog-content\") pod \"redhat-marketplace-sbjt9\" (UID: \"cabce219-6d9f-4aac-9402-ecf80e930f68\") " pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.663099 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cabce219-6d9f-4aac-9402-ecf80e930f68-utilities\") pod \"redhat-marketplace-sbjt9\" (UID: \"cabce219-6d9f-4aac-9402-ecf80e930f68\") " 
pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.663501 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cabce219-6d9f-4aac-9402-ecf80e930f68-catalog-content\") pod \"redhat-marketplace-sbjt9\" (UID: \"cabce219-6d9f-4aac-9402-ecf80e930f68\") " pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.686639 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fv8g\" (UniqueName: \"kubernetes.io/projected/cabce219-6d9f-4aac-9402-ecf80e930f68-kube-api-access-7fv8g\") pod \"redhat-marketplace-sbjt9\" (UID: \"cabce219-6d9f-4aac-9402-ecf80e930f68\") " pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 15:54:21 crc kubenswrapper[4693]: I1212 15:54:21.870327 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 15:54:22 crc kubenswrapper[4693]: I1212 15:54:22.097494 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbjt9"] Dec 12 15:54:22 crc kubenswrapper[4693]: W1212 15:54:22.098807 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcabce219_6d9f_4aac_9402_ecf80e930f68.slice/crio-1204baea188f5e8be30842f1d69537d44c72f779889727a78cfc63dc54455c3f WatchSource:0}: Error finding container 1204baea188f5e8be30842f1d69537d44c72f779889727a78cfc63dc54455c3f: Status 404 returned error can't find the container with id 1204baea188f5e8be30842f1d69537d44c72f779889727a78cfc63dc54455c3f Dec 12 15:54:22 crc kubenswrapper[4693]: I1212 15:54:22.600545 4693 generic.go:334] "Generic (PLEG): container finished" podID="cabce219-6d9f-4aac-9402-ecf80e930f68" containerID="3431033bf996bae05e6c8c836dddd484c4c86e491da919259d8423f83aef3a0e" exitCode=0 Dec 12 15:54:22 crc kubenswrapper[4693]: I1212 15:54:22.600895 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbjt9" event={"ID":"cabce219-6d9f-4aac-9402-ecf80e930f68","Type":"ContainerDied","Data":"3431033bf996bae05e6c8c836dddd484c4c86e491da919259d8423f83aef3a0e"} Dec 12 15:54:22 crc kubenswrapper[4693]: I1212 15:54:22.600926 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbjt9" event={"ID":"cabce219-6d9f-4aac-9402-ecf80e930f68","Type":"ContainerStarted","Data":"1204baea188f5e8be30842f1d69537d44c72f779889727a78cfc63dc54455c3f"} Dec 12 15:54:22 crc kubenswrapper[4693]: I1212 15:54:22.602663 4693 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 15:54:22 crc kubenswrapper[4693]: I1212 15:54:22.604351 4693 generic.go:334] "Generic (PLEG): container finished" podID="2dabbc2a-64f3-4f25-8b65-17ed75c51801" containerID="d3bd0db3ae195310e6c660978600cc124170e943e5c56a6b65283fd9f4c2498d" exitCode=0 Dec 12 15:54:22 crc kubenswrapper[4693]: I1212 15:54:22.605058 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh6" event={"ID":"2dabbc2a-64f3-4f25-8b65-17ed75c51801","Type":"ContainerDied","Data":"d3bd0db3ae195310e6c660978600cc124170e943e5c56a6b65283fd9f4c2498d"} Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.174798 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bhqb5"] Dec 
12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.176962 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.179447 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.192481 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bhqb5"] Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.283452 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nljpl\" (UniqueName: \"kubernetes.io/projected/60ada46e-eb41-4339-a653-610721982c81-kube-api-access-nljpl\") pod \"redhat-operators-bhqb5\" (UID: \"60ada46e-eb41-4339-a653-610721982c81\") " pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.283765 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60ada46e-eb41-4339-a653-610721982c81-catalog-content\") pod \"redhat-operators-bhqb5\" (UID: \"60ada46e-eb41-4339-a653-610721982c81\") " pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.283803 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60ada46e-eb41-4339-a653-610721982c81-utilities\") pod \"redhat-operators-bhqb5\" (UID: \"60ada46e-eb41-4339-a653-610721982c81\") " pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.384997 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60ada46e-eb41-4339-a653-610721982c81-catalog-content\") pod \"redhat-operators-bhqb5\" (UID: \"60ada46e-eb41-4339-a653-610721982c81\") " pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.385403 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60ada46e-eb41-4339-a653-610721982c81-utilities\") pod \"redhat-operators-bhqb5\" (UID: \"60ada46e-eb41-4339-a653-610721982c81\") " pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.385561 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nljpl\" (UniqueName: \"kubernetes.io/projected/60ada46e-eb41-4339-a653-610721982c81-kube-api-access-nljpl\") pod \"redhat-operators-bhqb5\" (UID: \"60ada46e-eb41-4339-a653-610721982c81\") " pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.386060 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60ada46e-eb41-4339-a653-610721982c81-catalog-content\") pod \"redhat-operators-bhqb5\" (UID: \"60ada46e-eb41-4339-a653-610721982c81\") " pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.386077 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/60ada46e-eb41-4339-a653-610721982c81-utilities\") pod \"redhat-operators-bhqb5\" (UID: \"60ada46e-eb41-4339-a653-610721982c81\") " pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.405118 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nljpl\" (UniqueName: \"kubernetes.io/projected/60ada46e-eb41-4339-a653-610721982c81-kube-api-access-nljpl\") pod \"redhat-operators-bhqb5\" (UID: \"60ada46e-eb41-4339-a653-610721982c81\") " pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.521814 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.614873 4693 generic.go:334] "Generic (PLEG): container finished" podID="2dabbc2a-64f3-4f25-8b65-17ed75c51801" containerID="3d19879becaf4731ae227fb39d3b0c525692207323c149d1a228f01a58c027f0" exitCode=0 Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.614914 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh6" event={"ID":"2dabbc2a-64f3-4f25-8b65-17ed75c51801","Type":"ContainerDied","Data":"3d19879becaf4731ae227fb39d3b0c525692207323c149d1a228f01a58c027f0"} Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.738893 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bhqb5"] Dec 12 15:54:23 crc kubenswrapper[4693]: W1212 15:54:23.744613 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60ada46e_eb41_4339_a653_610721982c81.slice/crio-a82322f4b747a527d2d105204bee30c293f9e02f4084cd5b0f3df6ca31dba023 WatchSource:0}: Error finding container a82322f4b747a527d2d105204bee30c293f9e02f4084cd5b0f3df6ca31dba023: Status 404 returned error can't find the container with id a82322f4b747a527d2d105204bee30c293f9e02f4084cd5b0f3df6ca31dba023 Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.776141 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x27t6"] Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.778606 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x27t6" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.781133 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.787417 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x27t6"] Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.891848 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjjgw\" (UniqueName: \"kubernetes.io/projected/bb6e1c71-15d0-4078-837b-0d0d7c9e981f-kube-api-access-mjjgw\") pod \"community-operators-x27t6\" (UID: \"bb6e1c71-15d0-4078-837b-0d0d7c9e981f\") " pod="openshift-marketplace/community-operators-x27t6" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.891906 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6e1c71-15d0-4078-837b-0d0d7c9e981f-utilities\") pod \"community-operators-x27t6\" (UID: \"bb6e1c71-15d0-4078-837b-0d0d7c9e981f\") " pod="openshift-marketplace/community-operators-x27t6" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.891930 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6e1c71-15d0-4078-837b-0d0d7c9e981f-catalog-content\") pod \"community-operators-x27t6\" (UID: \"bb6e1c71-15d0-4078-837b-0d0d7c9e981f\") " pod="openshift-marketplace/community-operators-x27t6" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.993394 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjjgw\" (UniqueName: \"kubernetes.io/projected/bb6e1c71-15d0-4078-837b-0d0d7c9e981f-kube-api-access-mjjgw\") pod \"community-operators-x27t6\" (UID: \"bb6e1c71-15d0-4078-837b-0d0d7c9e981f\") " pod="openshift-marketplace/community-operators-x27t6" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.993441 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6e1c71-15d0-4078-837b-0d0d7c9e981f-utilities\") pod \"community-operators-x27t6\" (UID: \"bb6e1c71-15d0-4078-837b-0d0d7c9e981f\") " pod="openshift-marketplace/community-operators-x27t6" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.993472 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6e1c71-15d0-4078-837b-0d0d7c9e981f-catalog-content\") pod \"community-operators-x27t6\" (UID: \"bb6e1c71-15d0-4078-837b-0d0d7c9e981f\") " pod="openshift-marketplace/community-operators-x27t6" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.993958 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6e1c71-15d0-4078-837b-0d0d7c9e981f-catalog-content\") pod \"community-operators-x27t6\" (UID: \"bb6e1c71-15d0-4078-837b-0d0d7c9e981f\") " pod="openshift-marketplace/community-operators-x27t6" Dec 12 15:54:23 crc kubenswrapper[4693]: I1212 15:54:23.994088 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6e1c71-15d0-4078-837b-0d0d7c9e981f-utilities\") pod \"community-operators-x27t6\" (UID: 
\"bb6e1c71-15d0-4078-837b-0d0d7c9e981f\") " pod="openshift-marketplace/community-operators-x27t6" Dec 12 15:54:24 crc kubenswrapper[4693]: I1212 15:54:24.017806 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjjgw\" (UniqueName: \"kubernetes.io/projected/bb6e1c71-15d0-4078-837b-0d0d7c9e981f-kube-api-access-mjjgw\") pod \"community-operators-x27t6\" (UID: \"bb6e1c71-15d0-4078-837b-0d0d7c9e981f\") " pod="openshift-marketplace/community-operators-x27t6" Dec 12 15:54:24 crc kubenswrapper[4693]: I1212 15:54:24.114077 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x27t6" Dec 12 15:54:24 crc kubenswrapper[4693]: I1212 15:54:24.294655 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x27t6"] Dec 12 15:54:24 crc kubenswrapper[4693]: W1212 15:54:24.302499 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb6e1c71_15d0_4078_837b_0d0d7c9e981f.slice/crio-d6195836066cf7da72a8244ee6a0b2efe30cd41ccd4b18e8939bb6e01cb2c6fd WatchSource:0}: Error finding container d6195836066cf7da72a8244ee6a0b2efe30cd41ccd4b18e8939bb6e01cb2c6fd: Status 404 returned error can't find the container with id d6195836066cf7da72a8244ee6a0b2efe30cd41ccd4b18e8939bb6e01cb2c6fd Dec 12 15:54:24 crc kubenswrapper[4693]: I1212 15:54:24.622143 4693 generic.go:334] "Generic (PLEG): container finished" podID="bb6e1c71-15d0-4078-837b-0d0d7c9e981f" containerID="d58edaa1a057fa896923e2fec1883889f56e7b5de9645d23fd43c532a3e7f2b9" exitCode=0 Dec 12 15:54:24 crc kubenswrapper[4693]: I1212 15:54:24.622203 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x27t6" event={"ID":"bb6e1c71-15d0-4078-837b-0d0d7c9e981f","Type":"ContainerDied","Data":"d58edaa1a057fa896923e2fec1883889f56e7b5de9645d23fd43c532a3e7f2b9"} Dec 12 15:54:24 crc kubenswrapper[4693]: I1212 15:54:24.622444 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x27t6" event={"ID":"bb6e1c71-15d0-4078-837b-0d0d7c9e981f","Type":"ContainerStarted","Data":"d6195836066cf7da72a8244ee6a0b2efe30cd41ccd4b18e8939bb6e01cb2c6fd"} Dec 12 15:54:24 crc kubenswrapper[4693]: I1212 15:54:24.626536 4693 generic.go:334] "Generic (PLEG): container finished" podID="60ada46e-eb41-4339-a653-610721982c81" containerID="86cac6c46c73f088ec934bddf6930525ea3bb3d407b188b9de528b97ef98606f" exitCode=0 Dec 12 15:54:24 crc kubenswrapper[4693]: I1212 15:54:24.626743 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhqb5" event={"ID":"60ada46e-eb41-4339-a653-610721982c81","Type":"ContainerDied","Data":"86cac6c46c73f088ec934bddf6930525ea3bb3d407b188b9de528b97ef98606f"} Dec 12 15:54:24 crc kubenswrapper[4693]: I1212 15:54:24.627472 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhqb5" event={"ID":"60ada46e-eb41-4339-a653-610721982c81","Type":"ContainerStarted","Data":"a82322f4b747a527d2d105204bee30c293f9e02f4084cd5b0f3df6ca31dba023"} Dec 12 15:54:24 crc kubenswrapper[4693]: I1212 15:54:24.631996 4693 generic.go:334] "Generic (PLEG): container finished" podID="cabce219-6d9f-4aac-9402-ecf80e930f68" containerID="6aab503eea24ecd818b770a29eccb5957a4a978805ca221681a46a7e9120c049" exitCode=0 Dec 12 15:54:24 crc kubenswrapper[4693]: I1212 15:54:24.632067 4693 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbjt9" event={"ID":"cabce219-6d9f-4aac-9402-ecf80e930f68","Type":"ContainerDied","Data":"6aab503eea24ecd818b770a29eccb5957a4a978805ca221681a46a7e9120c049"} Dec 12 15:54:24 crc kubenswrapper[4693]: I1212 15:54:24.636487 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh6" event={"ID":"2dabbc2a-64f3-4f25-8b65-17ed75c51801","Type":"ContainerStarted","Data":"baac4e20b8610ebe1b070a44df663c53a762a529e2d9e71ba3d832a73ae486e6"} Dec 12 15:54:24 crc kubenswrapper[4693]: I1212 15:54:24.695082 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q5jh6" podStartSLOduration=2.976557655 podStartE2EDuration="4.695061214s" podCreationTimestamp="2025-12-12 15:54:20 +0000 UTC" firstStartedPulling="2025-12-12 15:54:22.605593947 +0000 UTC m=+489.774233548" lastFinishedPulling="2025-12-12 15:54:24.324097506 +0000 UTC m=+491.492737107" observedRunningTime="2025-12-12 15:54:24.692848252 +0000 UTC m=+491.861487873" watchObservedRunningTime="2025-12-12 15:54:24.695061214 +0000 UTC m=+491.863700825" Dec 12 15:54:25 crc kubenswrapper[4693]: I1212 15:54:25.644411 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbjt9" event={"ID":"cabce219-6d9f-4aac-9402-ecf80e930f68","Type":"ContainerStarted","Data":"0325aa781ea934ee1b75d71ea522211958ede90495dbe62a8af90e04c0c7479f"} Dec 12 15:54:25 crc kubenswrapper[4693]: I1212 15:54:25.647384 4693 generic.go:334] "Generic (PLEG): container finished" podID="bb6e1c71-15d0-4078-837b-0d0d7c9e981f" containerID="d377c39b24d52e3d525c13b4625fcac95ceb89856622ef914b07638d9004f66c" exitCode=0 Dec 12 15:54:25 crc kubenswrapper[4693]: I1212 15:54:25.647536 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x27t6" event={"ID":"bb6e1c71-15d0-4078-837b-0d0d7c9e981f","Type":"ContainerDied","Data":"d377c39b24d52e3d525c13b4625fcac95ceb89856622ef914b07638d9004f66c"} Dec 12 15:54:25 crc kubenswrapper[4693]: I1212 15:54:25.650534 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhqb5" event={"ID":"60ada46e-eb41-4339-a653-610721982c81","Type":"ContainerStarted","Data":"455e29d558e3b4670d5a0d7f00ec349e0868125cdfd8ce7724a92e83a7364d0a"} Dec 12 15:54:25 crc kubenswrapper[4693]: I1212 15:54:25.666619 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sbjt9" podStartSLOduration=2.183258953 podStartE2EDuration="4.666599995s" podCreationTimestamp="2025-12-12 15:54:21 +0000 UTC" firstStartedPulling="2025-12-12 15:54:22.602476769 +0000 UTC m=+489.771116370" lastFinishedPulling="2025-12-12 15:54:25.085817811 +0000 UTC m=+492.254457412" observedRunningTime="2025-12-12 15:54:25.665938926 +0000 UTC m=+492.834578517" watchObservedRunningTime="2025-12-12 15:54:25.666599995 +0000 UTC m=+492.835239606" Dec 12 15:54:26 crc kubenswrapper[4693]: I1212 15:54:26.658361 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x27t6" event={"ID":"bb6e1c71-15d0-4078-837b-0d0d7c9e981f","Type":"ContainerStarted","Data":"9b5d5be6b4c3cded0da2add2ab5075aff8744567e0fc1a75dc42540dd9f943ad"} Dec 12 15:54:26 crc kubenswrapper[4693]: I1212 15:54:26.659954 4693 generic.go:334] "Generic (PLEG): container finished" podID="60ada46e-eb41-4339-a653-610721982c81" 
containerID="455e29d558e3b4670d5a0d7f00ec349e0868125cdfd8ce7724a92e83a7364d0a" exitCode=0 Dec 12 15:54:26 crc kubenswrapper[4693]: I1212 15:54:26.660046 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhqb5" event={"ID":"60ada46e-eb41-4339-a653-610721982c81","Type":"ContainerDied","Data":"455e29d558e3b4670d5a0d7f00ec349e0868125cdfd8ce7724a92e83a7364d0a"} Dec 12 15:54:26 crc kubenswrapper[4693]: I1212 15:54:26.676369 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x27t6" podStartSLOduration=2.1898297429999998 podStartE2EDuration="3.676352142s" podCreationTimestamp="2025-12-12 15:54:23 +0000 UTC" firstStartedPulling="2025-12-12 15:54:24.623813206 +0000 UTC m=+491.792452817" lastFinishedPulling="2025-12-12 15:54:26.110335615 +0000 UTC m=+493.278975216" observedRunningTime="2025-12-12 15:54:26.67593637 +0000 UTC m=+493.844575981" watchObservedRunningTime="2025-12-12 15:54:26.676352142 +0000 UTC m=+493.844991743" Dec 12 15:54:27 crc kubenswrapper[4693]: I1212 15:54:27.668686 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhqb5" event={"ID":"60ada46e-eb41-4339-a653-610721982c81","Type":"ContainerStarted","Data":"3e36f0f008f0523f9232b639bd37f432d37978155b96dc75dbe5bc3c72d9857b"} Dec 12 15:54:27 crc kubenswrapper[4693]: I1212 15:54:27.689567 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bhqb5" podStartSLOduration=2.163144541 podStartE2EDuration="4.689550317s" podCreationTimestamp="2025-12-12 15:54:23 +0000 UTC" firstStartedPulling="2025-12-12 15:54:24.629250639 +0000 UTC m=+491.797890240" lastFinishedPulling="2025-12-12 15:54:27.155656415 +0000 UTC m=+494.324296016" observedRunningTime="2025-12-12 15:54:27.686803 +0000 UTC m=+494.855442611" watchObservedRunningTime="2025-12-12 15:54:27.689550317 +0000 UTC m=+494.858189918" Dec 12 15:54:31 crc kubenswrapper[4693]: I1212 15:54:31.105687 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 15:54:31 crc kubenswrapper[4693]: I1212 15:54:31.106441 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 15:54:31 crc kubenswrapper[4693]: I1212 15:54:31.163385 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 15:54:31 crc kubenswrapper[4693]: I1212 15:54:31.731264 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 15:54:31 crc kubenswrapper[4693]: I1212 15:54:31.870964 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 15:54:31 crc kubenswrapper[4693]: I1212 15:54:31.871025 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 15:54:31 crc kubenswrapper[4693]: I1212 15:54:31.925176 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 15:54:32 crc kubenswrapper[4693]: I1212 15:54:32.731487 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 15:54:33 crc kubenswrapper[4693]: 
I1212 15:54:33.522217 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 15:54:33 crc kubenswrapper[4693]: I1212 15:54:33.522563 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 15:54:33 crc kubenswrapper[4693]: I1212 15:54:33.581009 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 15:54:33 crc kubenswrapper[4693]: I1212 15:54:33.739317 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 15:54:34 crc kubenswrapper[4693]: I1212 15:54:34.114897 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x27t6" Dec 12 15:54:34 crc kubenswrapper[4693]: I1212 15:54:34.114978 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x27t6" Dec 12 15:54:34 crc kubenswrapper[4693]: I1212 15:54:34.160827 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x27t6" Dec 12 15:54:34 crc kubenswrapper[4693]: I1212 15:54:34.745573 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x27t6" Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.716592 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx"] Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.718042 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx" Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.725075 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx"] Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.725185 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.725643 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.725845 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.726577 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.726808 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.793917 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/3009047a-0b78-433c-818a-8347e4566b47-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-g9stx\" (UID: \"3009047a-0b78-433c-818a-8347e4566b47\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx" Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.794017 4693 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/3009047a-0b78-433c-818a-8347e4566b47-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-g9stx\" (UID: \"3009047a-0b78-433c-818a-8347e4566b47\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx" Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.794072 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcf8f\" (UniqueName: \"kubernetes.io/projected/3009047a-0b78-433c-818a-8347e4566b47-kube-api-access-qcf8f\") pod \"cluster-monitoring-operator-6d5b84845-g9stx\" (UID: \"3009047a-0b78-433c-818a-8347e4566b47\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx" Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.895221 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/3009047a-0b78-433c-818a-8347e4566b47-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-g9stx\" (UID: \"3009047a-0b78-433c-818a-8347e4566b47\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx" Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.895357 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/3009047a-0b78-433c-818a-8347e4566b47-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-g9stx\" (UID: \"3009047a-0b78-433c-818a-8347e4566b47\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx" Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.895429 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcf8f\" (UniqueName: \"kubernetes.io/projected/3009047a-0b78-433c-818a-8347e4566b47-kube-api-access-qcf8f\") pod \"cluster-monitoring-operator-6d5b84845-g9stx\" (UID: \"3009047a-0b78-433c-818a-8347e4566b47\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx" Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.897161 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/3009047a-0b78-433c-818a-8347e4566b47-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-g9stx\" (UID: \"3009047a-0b78-433c-818a-8347e4566b47\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx" Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.904044 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/3009047a-0b78-433c-818a-8347e4566b47-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-g9stx\" (UID: \"3009047a-0b78-433c-818a-8347e4566b47\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx" Dec 12 15:54:48 crc kubenswrapper[4693]: I1212 15:54:48.920922 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcf8f\" (UniqueName: \"kubernetes.io/projected/3009047a-0b78-433c-818a-8347e4566b47-kube-api-access-qcf8f\") pod \"cluster-monitoring-operator-6d5b84845-g9stx\" (UID: \"3009047a-0b78-433c-818a-8347e4566b47\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx" Dec 12 15:54:49 crc kubenswrapper[4693]: I1212 
15:54:49.037155 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx" Dec 12 15:54:49 crc kubenswrapper[4693]: I1212 15:54:49.468750 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx"] Dec 12 15:54:49 crc kubenswrapper[4693]: I1212 15:54:49.788308 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx" event={"ID":"3009047a-0b78-433c-818a-8347e4566b47","Type":"ContainerStarted","Data":"fd8c5458ee2452b33c34df5ebde45ce25ec90c4615964593a6f55dd1b4450ff0"} Dec 12 15:54:52 crc kubenswrapper[4693]: I1212 15:54:52.808760 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx" event={"ID":"3009047a-0b78-433c-818a-8347e4566b47","Type":"ContainerStarted","Data":"12e88099d196ec10095240de5878237f3ea291c3f1f05312f060e5f68d3f20f2"} Dec 12 15:54:53 crc kubenswrapper[4693]: I1212 15:54:53.207343 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44"] Dec 12 15:54:53 crc kubenswrapper[4693]: I1212 15:54:53.207971 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" Dec 12 15:54:53 crc kubenswrapper[4693]: I1212 15:54:53.209997 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-b2znx" Dec 12 15:54:53 crc kubenswrapper[4693]: I1212 15:54:53.210063 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Dec 12 15:54:53 crc kubenswrapper[4693]: I1212 15:54:53.229658 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44"] Dec 12 15:54:53 crc kubenswrapper[4693]: I1212 15:54:53.259783 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/cae4711d-6ae1-402f-9fc8-751998ed785d-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-zwt44\" (UID: \"cae4711d-6ae1-402f-9fc8-751998ed785d\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" Dec 12 15:54:53 crc kubenswrapper[4693]: I1212 15:54:53.360759 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/cae4711d-6ae1-402f-9fc8-751998ed785d-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-zwt44\" (UID: \"cae4711d-6ae1-402f-9fc8-751998ed785d\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" Dec 12 15:54:53 crc kubenswrapper[4693]: I1212 15:54:53.380073 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/cae4711d-6ae1-402f-9fc8-751998ed785d-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-zwt44\" (UID: \"cae4711d-6ae1-402f-9fc8-751998ed785d\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" Dec 12 15:54:53 crc kubenswrapper[4693]: I1212 15:54:53.522606 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" Dec 12 15:54:53 crc kubenswrapper[4693]: I1212 15:54:53.727482 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44"] Dec 12 15:54:53 crc kubenswrapper[4693]: I1212 15:54:53.815195 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" event={"ID":"cae4711d-6ae1-402f-9fc8-751998ed785d","Type":"ContainerStarted","Data":"96f828bbff06b50a308fdf83a74773afa0a529770825a0cb35d9f0ccabcf7429"} Dec 12 15:54:53 crc kubenswrapper[4693]: I1212 15:54:53.838485 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9stx" podStartSLOduration=2.8463807 podStartE2EDuration="5.838459803s" podCreationTimestamp="2025-12-12 15:54:48 +0000 UTC" firstStartedPulling="2025-12-12 15:54:49.483129319 +0000 UTC m=+516.651768950" lastFinishedPulling="2025-12-12 15:54:52.475208452 +0000 UTC m=+519.643848053" observedRunningTime="2025-12-12 15:54:53.831921516 +0000 UTC m=+521.000561117" watchObservedRunningTime="2025-12-12 15:54:53.838459803 +0000 UTC m=+521.007099404" Dec 12 15:54:55 crc kubenswrapper[4693]: I1212 15:54:55.831243 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" event={"ID":"cae4711d-6ae1-402f-9fc8-751998ed785d","Type":"ContainerStarted","Data":"11c8d1c7d9c661001bf7db151b3c68c56edce7a8688ad7a11c612875c0194270"} Dec 12 15:54:55 crc kubenswrapper[4693]: I1212 15:54:55.831888 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" Dec 12 15:54:55 crc kubenswrapper[4693]: I1212 15:54:55.837819 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" Dec 12 15:54:55 crc kubenswrapper[4693]: I1212 15:54:55.857426 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" podStartSLOduration=0.996416641 podStartE2EDuration="2.857387242s" podCreationTimestamp="2025-12-12 15:54:53 +0000 UTC" firstStartedPulling="2025-12-12 15:54:53.73402908 +0000 UTC m=+520.902668671" lastFinishedPulling="2025-12-12 15:54:55.594999671 +0000 UTC m=+522.763639272" observedRunningTime="2025-12-12 15:54:55.846518968 +0000 UTC m=+523.015158569" watchObservedRunningTime="2025-12-12 15:54:55.857387242 +0000 UTC m=+523.026026873" Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.297114 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-s97gq"] Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.298195 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.303282 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.303726 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.305730 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-zjzmp" Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.307493 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.314233 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-s97gq"] Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.404452 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/fc493289-8c0f-41fe-bc1e-6ebbf9a40815-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-s97gq\" (UID: \"fc493289-8c0f-41fe-bc1e-6ebbf9a40815\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.404657 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fc493289-8c0f-41fe-bc1e-6ebbf9a40815-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-s97gq\" (UID: \"fc493289-8c0f-41fe-bc1e-6ebbf9a40815\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.404718 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fc493289-8c0f-41fe-bc1e-6ebbf9a40815-metrics-client-ca\") pod \"prometheus-operator-db54df47d-s97gq\" (UID: \"fc493289-8c0f-41fe-bc1e-6ebbf9a40815\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.404945 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr8zj\" (UniqueName: \"kubernetes.io/projected/fc493289-8c0f-41fe-bc1e-6ebbf9a40815-kube-api-access-jr8zj\") pod \"prometheus-operator-db54df47d-s97gq\" (UID: \"fc493289-8c0f-41fe-bc1e-6ebbf9a40815\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.506186 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fc493289-8c0f-41fe-bc1e-6ebbf9a40815-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-s97gq\" (UID: \"fc493289-8c0f-41fe-bc1e-6ebbf9a40815\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.506251 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/fc493289-8c0f-41fe-bc1e-6ebbf9a40815-metrics-client-ca\") pod \"prometheus-operator-db54df47d-s97gq\" (UID: \"fc493289-8c0f-41fe-bc1e-6ebbf9a40815\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.506338 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr8zj\" (UniqueName: \"kubernetes.io/projected/fc493289-8c0f-41fe-bc1e-6ebbf9a40815-kube-api-access-jr8zj\") pod \"prometheus-operator-db54df47d-s97gq\" (UID: \"fc493289-8c0f-41fe-bc1e-6ebbf9a40815\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.506380 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/fc493289-8c0f-41fe-bc1e-6ebbf9a40815-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-s97gq\" (UID: \"fc493289-8c0f-41fe-bc1e-6ebbf9a40815\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" Dec 12 15:54:56 crc kubenswrapper[4693]: E1212 15:54:56.506522 4693 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Dec 12 15:54:56 crc kubenswrapper[4693]: E1212 15:54:56.506595 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc493289-8c0f-41fe-bc1e-6ebbf9a40815-prometheus-operator-tls podName:fc493289-8c0f-41fe-bc1e-6ebbf9a40815 nodeName:}" failed. No retries permitted until 2025-12-12 15:54:57.006573906 +0000 UTC m=+524.175213497 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/fc493289-8c0f-41fe-bc1e-6ebbf9a40815-prometheus-operator-tls") pod "prometheus-operator-db54df47d-s97gq" (UID: "fc493289-8c0f-41fe-bc1e-6ebbf9a40815") : secret "prometheus-operator-tls" not found Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.507641 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fc493289-8c0f-41fe-bc1e-6ebbf9a40815-metrics-client-ca\") pod \"prometheus-operator-db54df47d-s97gq\" (UID: \"fc493289-8c0f-41fe-bc1e-6ebbf9a40815\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.516589 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fc493289-8c0f-41fe-bc1e-6ebbf9a40815-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-s97gq\" (UID: \"fc493289-8c0f-41fe-bc1e-6ebbf9a40815\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" Dec 12 15:54:56 crc kubenswrapper[4693]: I1212 15:54:56.523065 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr8zj\" (UniqueName: \"kubernetes.io/projected/fc493289-8c0f-41fe-bc1e-6ebbf9a40815-kube-api-access-jr8zj\") pod \"prometheus-operator-db54df47d-s97gq\" (UID: \"fc493289-8c0f-41fe-bc1e-6ebbf9a40815\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" Dec 12 15:54:57 crc kubenswrapper[4693]: I1212 15:54:57.012893 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/fc493289-8c0f-41fe-bc1e-6ebbf9a40815-prometheus-operator-tls\") pod 
\"prometheus-operator-db54df47d-s97gq\" (UID: \"fc493289-8c0f-41fe-bc1e-6ebbf9a40815\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" Dec 12 15:54:57 crc kubenswrapper[4693]: I1212 15:54:57.017640 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/fc493289-8c0f-41fe-bc1e-6ebbf9a40815-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-s97gq\" (UID: \"fc493289-8c0f-41fe-bc1e-6ebbf9a40815\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" Dec 12 15:54:57 crc kubenswrapper[4693]: I1212 15:54:57.219257 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" Dec 12 15:54:57 crc kubenswrapper[4693]: I1212 15:54:57.445251 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-s97gq"] Dec 12 15:54:57 crc kubenswrapper[4693]: W1212 15:54:57.468819 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc493289_8c0f_41fe_bc1e_6ebbf9a40815.slice/crio-dcf4cfed2d110660dd20e5ae303fb17abb9d6dea44e6c9bcab046bbc95984b55 WatchSource:0}: Error finding container dcf4cfed2d110660dd20e5ae303fb17abb9d6dea44e6c9bcab046bbc95984b55: Status 404 returned error can't find the container with id dcf4cfed2d110660dd20e5ae303fb17abb9d6dea44e6c9bcab046bbc95984b55 Dec 12 15:54:57 crc kubenswrapper[4693]: I1212 15:54:57.850612 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" event={"ID":"fc493289-8c0f-41fe-bc1e-6ebbf9a40815","Type":"ContainerStarted","Data":"dcf4cfed2d110660dd20e5ae303fb17abb9d6dea44e6c9bcab046bbc95984b55"} Dec 12 15:54:59 crc kubenswrapper[4693]: I1212 15:54:59.863023 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" event={"ID":"fc493289-8c0f-41fe-bc1e-6ebbf9a40815","Type":"ContainerStarted","Data":"48d74e89164f67554a6926f1fbca266f2b6d061725d581235e1db9e2a6a4c770"} Dec 12 15:54:59 crc kubenswrapper[4693]: I1212 15:54:59.863382 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" event={"ID":"fc493289-8c0f-41fe-bc1e-6ebbf9a40815","Type":"ContainerStarted","Data":"b816ca67d266bddab83169127b6d18c85fe7b6885d765b6bca56f17569272432"} Dec 12 15:54:59 crc kubenswrapper[4693]: I1212 15:54:59.888501 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-s97gq" podStartSLOduration=2.41879714 podStartE2EDuration="3.88825949s" podCreationTimestamp="2025-12-12 15:54:56 +0000 UTC" firstStartedPulling="2025-12-12 15:54:57.472913479 +0000 UTC m=+524.641553080" lastFinishedPulling="2025-12-12 15:54:58.942375829 +0000 UTC m=+526.111015430" observedRunningTime="2025-12-12 15:54:59.883208504 +0000 UTC m=+527.051848105" watchObservedRunningTime="2025-12-12 15:54:59.88825949 +0000 UTC m=+527.056899091" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.665339 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf"] Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.667321 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.681659 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.681848 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.682374 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-rlf5s" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.685948 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf"] Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.701712 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx"] Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.702879 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.706232 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-tlj2z" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.706641 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.707011 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.707157 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.729121 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx"] Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.736222 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-mpdpm"] Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.737714 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.742507 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.742600 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.742733 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-nhgl4" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.777924 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.777996 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.778037 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx6z8\" (UniqueName: \"kubernetes.io/projected/101a48bf-f905-49c5-aa8f-c4f4b5555d8e-kube-api-access-dx6z8\") pod \"openshift-state-metrics-566fddb674-4s7rf\" (UID: \"101a48bf-f905-49c5-aa8f-c4f4b5555d8e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.778206 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/f61a03b7-8079-47cf-a798-ce9e271f58d9-node-exporter-textfile\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.778264 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.778470 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f61a03b7-8079-47cf-a798-ce9e271f58d9-sys\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.778519 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m86dt\" (UniqueName: 
\"kubernetes.io/projected/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-kube-api-access-m86dt\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.778597 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/f61a03b7-8079-47cf-a798-ce9e271f58d9-node-exporter-wtmp\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.778631 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ls78\" (UniqueName: \"kubernetes.io/projected/f61a03b7-8079-47cf-a798-ce9e271f58d9-kube-api-access-2ls78\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.778667 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.778765 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f61a03b7-8079-47cf-a798-ce9e271f58d9-metrics-client-ca\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.778790 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/101a48bf-f905-49c5-aa8f-c4f4b5555d8e-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-4s7rf\" (UID: \"101a48bf-f905-49c5-aa8f-c4f4b5555d8e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.778831 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f61a03b7-8079-47cf-a798-ce9e271f58d9-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.778860 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/f61a03b7-8079-47cf-a798-ce9e271f58d9-node-exporter-tls\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.778933 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/101a48bf-f905-49c5-aa8f-c4f4b5555d8e-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-4s7rf\" (UID: \"101a48bf-f905-49c5-aa8f-c4f4b5555d8e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.779112 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/101a48bf-f905-49c5-aa8f-c4f4b5555d8e-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-4s7rf\" (UID: \"101a48bf-f905-49c5-aa8f-c4f4b5555d8e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.779198 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.779236 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/f61a03b7-8079-47cf-a798-ce9e271f58d9-root\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.880434 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.880717 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/f61a03b7-8079-47cf-a798-ce9e271f58d9-root\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.880812 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.880905 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.880996 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx6z8\" (UniqueName: 
\"kubernetes.io/projected/101a48bf-f905-49c5-aa8f-c4f4b5555d8e-kube-api-access-dx6z8\") pod \"openshift-state-metrics-566fddb674-4s7rf\" (UID: \"101a48bf-f905-49c5-aa8f-c4f4b5555d8e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.881079 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/f61a03b7-8079-47cf-a798-ce9e271f58d9-node-exporter-textfile\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.881168 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.880862 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/f61a03b7-8079-47cf-a798-ce9e271f58d9-root\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.881533 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/f61a03b7-8079-47cf-a798-ce9e271f58d9-node-exporter-textfile\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.881959 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.882206 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.882336 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f61a03b7-8079-47cf-a798-ce9e271f58d9-sys\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.882415 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m86dt\" (UniqueName: \"kubernetes.io/projected/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-kube-api-access-m86dt\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 
15:55:01.882504 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/f61a03b7-8079-47cf-a798-ce9e271f58d9-node-exporter-wtmp\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.882580 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ls78\" (UniqueName: \"kubernetes.io/projected/f61a03b7-8079-47cf-a798-ce9e271f58d9-kube-api-access-2ls78\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.882659 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.882751 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f61a03b7-8079-47cf-a798-ce9e271f58d9-metrics-client-ca\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.882825 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/101a48bf-f905-49c5-aa8f-c4f4b5555d8e-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-4s7rf\" (UID: \"101a48bf-f905-49c5-aa8f-c4f4b5555d8e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.882902 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f61a03b7-8079-47cf-a798-ce9e271f58d9-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.882978 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/f61a03b7-8079-47cf-a798-ce9e271f58d9-node-exporter-tls\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.883068 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/101a48bf-f905-49c5-aa8f-c4f4b5555d8e-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-4s7rf\" (UID: \"101a48bf-f905-49c5-aa8f-c4f4b5555d8e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.883156 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/101a48bf-f905-49c5-aa8f-c4f4b5555d8e-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-4s7rf\" (UID: \"101a48bf-f905-49c5-aa8f-c4f4b5555d8e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.883610 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f61a03b7-8079-47cf-a798-ce9e271f58d9-sys\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.883614 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.883710 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/f61a03b7-8079-47cf-a798-ce9e271f58d9-node-exporter-wtmp\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.885894 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f61a03b7-8079-47cf-a798-ce9e271f58d9-metrics-client-ca\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.886669 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/101a48bf-f905-49c5-aa8f-c4f4b5555d8e-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-4s7rf\" (UID: \"101a48bf-f905-49c5-aa8f-c4f4b5555d8e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.889360 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/101a48bf-f905-49c5-aa8f-c4f4b5555d8e-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-4s7rf\" (UID: \"101a48bf-f905-49c5-aa8f-c4f4b5555d8e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.890642 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f61a03b7-8079-47cf-a798-ce9e271f58d9-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.890871 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/101a48bf-f905-49c5-aa8f-c4f4b5555d8e-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-4s7rf\" (UID: \"101a48bf-f905-49c5-aa8f-c4f4b5555d8e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" Dec 
12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.891941 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/f61a03b7-8079-47cf-a798-ce9e271f58d9-node-exporter-tls\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.895992 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.909507 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.915162 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m86dt\" (UniqueName: \"kubernetes.io/projected/f97037e9-7ec8-4651-bd68-cb85c8b6d6d8-kube-api-access-m86dt\") pod \"kube-state-metrics-777cb5bd5d-9xjzx\" (UID: \"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.915588 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ls78\" (UniqueName: \"kubernetes.io/projected/f61a03b7-8079-47cf-a798-ce9e271f58d9-kube-api-access-2ls78\") pod \"node-exporter-mpdpm\" (UID: \"f61a03b7-8079-47cf-a798-ce9e271f58d9\") " pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.916813 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx6z8\" (UniqueName: \"kubernetes.io/projected/101a48bf-f905-49c5-aa8f-c4f4b5555d8e-kube-api-access-dx6z8\") pod \"openshift-state-metrics-566fddb674-4s7rf\" (UID: \"101a48bf-f905-49c5-aa8f-c4f4b5555d8e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" Dec 12 15:55:01 crc kubenswrapper[4693]: I1212 15:55:01.984540 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" Dec 12 15:55:02 crc kubenswrapper[4693]: I1212 15:55:02.017323 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" Dec 12 15:55:02 crc kubenswrapper[4693]: I1212 15:55:02.055106 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-mpdpm" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.212324 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf"] Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.326769 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx"] Dec 12 15:55:03 crc kubenswrapper[4693]: W1212 15:55:02.338398 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf97037e9_7ec8_4651_bd68_cb85c8b6d6d8.slice/crio-66e07b6a07b58e899be7cf5dc81b4a4c09e6dddec01e8cb8169177aff8238392 WatchSource:0}: Error finding container 66e07b6a07b58e899be7cf5dc81b4a4c09e6dddec01e8cb8169177aff8238392: Status 404 returned error can't find the container with id 66e07b6a07b58e899be7cf5dc81b4a4c09e6dddec01e8cb8169177aff8238392 Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.735352 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.738362 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.742190 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.742267 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.742341 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.757902 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.757974 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-mb2pb" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.758140 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.758310 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.761515 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.764957 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.767752 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.812220 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: 
\"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.812662 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-web-config\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.812721 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84c903fa-ee30-441e-b8a2-b1e5825763cc-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.812775 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/84c903fa-ee30-441e-b8a2-b1e5825763cc-config-out\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.812798 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.812823 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cht5w\" (UniqueName: \"kubernetes.io/projected/84c903fa-ee30-441e-b8a2-b1e5825763cc-kube-api-access-cht5w\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.812842 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/84c903fa-ee30-441e-b8a2-b1e5825763cc-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.812866 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/84c903fa-ee30-441e-b8a2-b1e5825763cc-tls-assets\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.812886 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-config-volume\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.812916 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.812942 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/84c903fa-ee30-441e-b8a2-b1e5825763cc-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.812979 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.880763 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" event={"ID":"101a48bf-f905-49c5-aa8f-c4f4b5555d8e","Type":"ContainerStarted","Data":"0df7f7312c34e47fb84e29c7ef9ac2f8317d42732c25b49008873e8ae446e2ab"} Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.880800 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" event={"ID":"101a48bf-f905-49c5-aa8f-c4f4b5555d8e","Type":"ContainerStarted","Data":"dbedcc2db4e18e00f1c7b97c8371f1b80e5924c86e76898c69ce93f0b4a7a5e8"} Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.880812 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" event={"ID":"101a48bf-f905-49c5-aa8f-c4f4b5555d8e","Type":"ContainerStarted","Data":"c295a22dd7ce1625a374ff9eeb2cd46d6460e563d9a52399ae9e07a02163f56a"} Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.881903 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mpdpm" event={"ID":"f61a03b7-8079-47cf-a798-ce9e271f58d9","Type":"ContainerStarted","Data":"aa4d2913f9c51a16dfe29a66689f4392f3eea88b852c54c4bf8fead8446bf08f"} Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.883222 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" event={"ID":"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8","Type":"ContainerStarted","Data":"66e07b6a07b58e899be7cf5dc81b4a4c09e6dddec01e8cb8169177aff8238392"} Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.914860 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.914917 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: 
\"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.914960 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-web-config\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.914977 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84c903fa-ee30-441e-b8a2-b1e5825763cc-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.915013 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cht5w\" (UniqueName: \"kubernetes.io/projected/84c903fa-ee30-441e-b8a2-b1e5825763cc-kube-api-access-cht5w\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.915034 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/84c903fa-ee30-441e-b8a2-b1e5825763cc-config-out\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.915053 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.915072 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/84c903fa-ee30-441e-b8a2-b1e5825763cc-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.915088 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/84c903fa-ee30-441e-b8a2-b1e5825763cc-tls-assets\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.915105 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-config-volume\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.915123 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: 
\"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.915147 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/84c903fa-ee30-441e-b8a2-b1e5825763cc-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.915833 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/84c903fa-ee30-441e-b8a2-b1e5825763cc-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.918839 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84c903fa-ee30-441e-b8a2-b1e5825763cc-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.919417 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/84c903fa-ee30-441e-b8a2-b1e5825763cc-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.922855 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-config-volume\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.922982 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/84c903fa-ee30-441e-b8a2-b1e5825763cc-tls-assets\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.923130 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.924299 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.924801 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: 
\"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.925269 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-web-config\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.926345 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/84c903fa-ee30-441e-b8a2-b1e5825763cc-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.939206 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cht5w\" (UniqueName: \"kubernetes.io/projected/84c903fa-ee30-441e-b8a2-b1e5825763cc-kube-api-access-cht5w\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:02.941688 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/84c903fa-ee30-441e-b8a2-b1e5825763cc-config-out\") pod \"alertmanager-main-0\" (UID: \"84c903fa-ee30-441e-b8a2-b1e5825763cc\") " pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.079514 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.592482 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.721611 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-7f697d8f45-x28ts"] Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.727078 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.734795 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.734859 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-ptxcx" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.735083 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.735255 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.735472 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.735623 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.736019 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-2f6htf3n23mnc" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.741712 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7f697d8f45-x28ts"] Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.855404 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdr9k\" (UniqueName: \"kubernetes.io/projected/3808e52c-3efa-4017-b799-bc195fd1d611-kube-api-access-sdr9k\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.855468 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.855641 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-grpc-tls\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.855768 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.855827 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.855878 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-thanos-querier-tls\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.855904 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.855971 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3808e52c-3efa-4017-b799-bc195fd1d611-metrics-client-ca\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: W1212 15:55:03.866081 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84c903fa_ee30_441e_b8a2_b1e5825763cc.slice/crio-2509c3880313e4d39ffb3b7fefb1006f45aae22c7f04784c49f894f9022d5bca WatchSource:0}: Error finding container 2509c3880313e4d39ffb3b7fefb1006f45aae22c7f04784c49f894f9022d5bca: Status 404 returned error can't find the container with id 2509c3880313e4d39ffb3b7fefb1006f45aae22c7f04784c49f894f9022d5bca Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.892796 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"84c903fa-ee30-441e-b8a2-b1e5825763cc","Type":"ContainerStarted","Data":"2509c3880313e4d39ffb3b7fefb1006f45aae22c7f04784c49f894f9022d5bca"} Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.957955 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3808e52c-3efa-4017-b799-bc195fd1d611-metrics-client-ca\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.958287 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdr9k\" (UniqueName: \"kubernetes.io/projected/3808e52c-3efa-4017-b799-bc195fd1d611-kube-api-access-sdr9k\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.958322 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" 
(UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.958368 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-grpc-tls\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.958414 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.958437 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.958458 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-thanos-querier-tls\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.958479 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.964069 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3808e52c-3efa-4017-b799-bc195fd1d611-metrics-client-ca\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.965815 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.966087 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-grpc-tls\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.976852 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.977036 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.978872 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.978985 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/3808e52c-3efa-4017-b799-bc195fd1d611-secret-thanos-querier-tls\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:03 crc kubenswrapper[4693]: I1212 15:55:03.980049 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdr9k\" (UniqueName: \"kubernetes.io/projected/3808e52c-3efa-4017-b799-bc195fd1d611-kube-api-access-sdr9k\") pod \"thanos-querier-7f697d8f45-x28ts\" (UID: \"3808e52c-3efa-4017-b799-bc195fd1d611\") " pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:04 crc kubenswrapper[4693]: I1212 15:55:04.066923 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:04 crc kubenswrapper[4693]: I1212 15:55:04.829801 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7f697d8f45-x28ts"] Dec 12 15:55:04 crc kubenswrapper[4693]: W1212 15:55:04.843231 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3808e52c_3efa_4017_b799_bc195fd1d611.slice/crio-af11a90f99d84409e9dae24a9ee0baf2e01bf4198c1d47843594885dc89a68b5 WatchSource:0}: Error finding container af11a90f99d84409e9dae24a9ee0baf2e01bf4198c1d47843594885dc89a68b5: Status 404 returned error can't find the container with id af11a90f99d84409e9dae24a9ee0baf2e01bf4198c1d47843594885dc89a68b5 Dec 12 15:55:04 crc kubenswrapper[4693]: I1212 15:55:04.912194 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" event={"ID":"3808e52c-3efa-4017-b799-bc195fd1d611","Type":"ContainerStarted","Data":"af11a90f99d84409e9dae24a9ee0baf2e01bf4198c1d47843594885dc89a68b5"} Dec 12 15:55:04 crc kubenswrapper[4693]: I1212 15:55:04.929948 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" event={"ID":"101a48bf-f905-49c5-aa8f-c4f4b5555d8e","Type":"ContainerStarted","Data":"02f374e837903a1c702e99f0b6bdd046e13f3d7f62ed84c58764cc068793b2a3"} Dec 12 15:55:04 crc kubenswrapper[4693]: I1212 15:55:04.936673 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mpdpm" event={"ID":"f61a03b7-8079-47cf-a798-ce9e271f58d9","Type":"ContainerStarted","Data":"566fc2ec323498d19f0f01c00c6b42ea4460aa57b5cd22b71a69372fa78b6d1b"} Dec 12 15:55:04 crc kubenswrapper[4693]: I1212 15:55:04.943833 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" event={"ID":"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8","Type":"ContainerStarted","Data":"c6f51aebffb94808e38024081ac3b43e0086b5b6276ab5f15da7094aae048774"} Dec 12 15:55:04 crc kubenswrapper[4693]: I1212 15:55:04.965435 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4s7rf" podStartSLOduration=2.145956825 podStartE2EDuration="3.965404114s" podCreationTimestamp="2025-12-12 15:55:01 +0000 UTC" firstStartedPulling="2025-12-12 15:55:02.783015997 +0000 UTC m=+529.951655598" lastFinishedPulling="2025-12-12 15:55:04.602463286 +0000 UTC m=+531.771102887" observedRunningTime="2025-12-12 15:55:04.963418221 +0000 UTC m=+532.132057822" watchObservedRunningTime="2025-12-12 15:55:04.965404114 +0000 UTC m=+532.134043725" Dec 12 15:55:05 crc kubenswrapper[4693]: I1212 15:55:05.952049 4693 generic.go:334] "Generic (PLEG): container finished" podID="84c903fa-ee30-441e-b8a2-b1e5825763cc" containerID="574a114f7d74001befcb2d2515755e623dfc57f20b5010bd8e3a780b7db3a7b0" exitCode=0 Dec 12 15:55:05 crc kubenswrapper[4693]: I1212 15:55:05.952114 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"84c903fa-ee30-441e-b8a2-b1e5825763cc","Type":"ContainerDied","Data":"574a114f7d74001befcb2d2515755e623dfc57f20b5010bd8e3a780b7db3a7b0"} Dec 12 15:55:05 crc kubenswrapper[4693]: I1212 15:55:05.955410 4693 generic.go:334] "Generic (PLEG): container finished" podID="f61a03b7-8079-47cf-a798-ce9e271f58d9" 
containerID="566fc2ec323498d19f0f01c00c6b42ea4460aa57b5cd22b71a69372fa78b6d1b" exitCode=0 Dec 12 15:55:05 crc kubenswrapper[4693]: I1212 15:55:05.955474 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mpdpm" event={"ID":"f61a03b7-8079-47cf-a798-ce9e271f58d9","Type":"ContainerDied","Data":"566fc2ec323498d19f0f01c00c6b42ea4460aa57b5cd22b71a69372fa78b6d1b"} Dec 12 15:55:05 crc kubenswrapper[4693]: I1212 15:55:05.959172 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" event={"ID":"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8","Type":"ContainerStarted","Data":"103e9971cc039b3ef69a9ec136db3e47ebae2faf2f9183374c23dbb5b06a7764"} Dec 12 15:55:05 crc kubenswrapper[4693]: I1212 15:55:05.959221 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" event={"ID":"f97037e9-7ec8-4651-bd68-cb85c8b6d6d8","Type":"ContainerStarted","Data":"c39bbe7c5f577890264ae26718a2fc87548e5148a56dcc9aff55f19b0b6dc9be"} Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.021971 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9xjzx" podStartSLOduration=2.789721381 podStartE2EDuration="5.021951996s" podCreationTimestamp="2025-12-12 15:55:01 +0000 UTC" firstStartedPulling="2025-12-12 15:55:02.343232922 +0000 UTC m=+529.511872523" lastFinishedPulling="2025-12-12 15:55:04.575463537 +0000 UTC m=+531.744103138" observedRunningTime="2025-12-12 15:55:06.017509546 +0000 UTC m=+533.186149147" watchObservedRunningTime="2025-12-12 15:55:06.021951996 +0000 UTC m=+533.190591597" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.446317 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5749ff4bc8-hnj69"] Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.447599 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.464434 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5749ff4bc8-hnj69"] Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.492132 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-oauth-serving-cert\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.492182 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-config\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.492243 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-trusted-ca-bundle\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.492335 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-serving-cert\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.492371 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-oauth-config\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.492437 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwzfb\" (UniqueName: \"kubernetes.io/projected/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-kube-api-access-pwzfb\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.492496 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-service-ca\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.593527 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-oauth-serving-cert\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc 
kubenswrapper[4693]: I1212 15:55:06.595070 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-config\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.595015 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-oauth-serving-cert\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.595149 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-trusted-ca-bundle\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.595681 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-config\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.595748 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-serving-cert\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.595770 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-oauth-config\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.595830 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwzfb\" (UniqueName: \"kubernetes.io/projected/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-kube-api-access-pwzfb\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.595872 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-service-ca\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.596165 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-trusted-ca-bundle\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 
15:55:06.596565 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-service-ca\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.600924 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-oauth-config\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.611223 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-serving-cert\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.612265 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwzfb\" (UniqueName: \"kubernetes.io/projected/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-kube-api-access-pwzfb\") pod \"console-5749ff4bc8-hnj69\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.770924 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.970108 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mpdpm" event={"ID":"f61a03b7-8079-47cf-a798-ce9e271f58d9","Type":"ContainerStarted","Data":"126b2f6d32b27e74e685f7468be76f17559cdffae8baf124478c07489f621292"} Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.970508 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mpdpm" event={"ID":"f61a03b7-8079-47cf-a798-ce9e271f58d9","Type":"ContainerStarted","Data":"592b26e8de52cc6b8c523c3455c09ca7e58ae62895d0c674e7b6dff0ce6a1520"} Dec 12 15:55:06 crc kubenswrapper[4693]: I1212 15:55:06.993186 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-mpdpm" podStartSLOduration=3.499210916 podStartE2EDuration="5.993168012s" podCreationTimestamp="2025-12-12 15:55:01 +0000 UTC" firstStartedPulling="2025-12-12 15:55:02.081578562 +0000 UTC m=+529.250218163" lastFinishedPulling="2025-12-12 15:55:04.575535658 +0000 UTC m=+531.744175259" observedRunningTime="2025-12-12 15:55:06.990750097 +0000 UTC m=+534.159389698" watchObservedRunningTime="2025-12-12 15:55:06.993168012 +0000 UTC m=+534.161807613" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.048414 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-66fcfb545d-whswt"] Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.049120 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.052101 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-9vpci3rgo0ipm" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.052293 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-9brkw" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.052326 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.052301 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.052929 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.053063 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.066638 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-66fcfb545d-whswt"] Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.102686 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-audit-log\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.102739 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.103005 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srftj\" (UniqueName: \"kubernetes.io/projected/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-kube-api-access-srftj\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.103208 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-metrics-server-audit-profiles\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.103323 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-client-ca-bundle\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " 
pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.103424 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-secret-metrics-server-tls\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.103516 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-secret-metrics-client-certs\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.160425 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5749ff4bc8-hnj69"] Dec 12 15:55:07 crc kubenswrapper[4693]: W1212 15:55:07.168026 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1389d6fe_36fc_4b22_b9f3_d1f12e2fac85.slice/crio-c2c586c4dff1cec489740e3279ce89da0c213673ec03fdf66d98f2685be96076 WatchSource:0}: Error finding container c2c586c4dff1cec489740e3279ce89da0c213673ec03fdf66d98f2685be96076: Status 404 returned error can't find the container with id c2c586c4dff1cec489740e3279ce89da0c213673ec03fdf66d98f2685be96076 Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.205653 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-secret-metrics-client-certs\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.205725 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.205755 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-audit-log\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.205813 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srftj\" (UniqueName: \"kubernetes.io/projected/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-kube-api-access-srftj\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.205875 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: 
\"kubernetes.io/configmap/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-metrics-server-audit-profiles\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.205917 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-client-ca-bundle\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.205960 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-secret-metrics-server-tls\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.206472 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-audit-log\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.207422 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-metrics-server-audit-profiles\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.207852 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.210815 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-secret-metrics-client-certs\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.211107 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-client-ca-bundle\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.211160 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-secret-metrics-server-tls\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " 
pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.223468 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srftj\" (UniqueName: \"kubernetes.io/projected/cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0-kube-api-access-srftj\") pod \"metrics-server-66fcfb545d-whswt\" (UID: \"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0\") " pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.377450 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.421430 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-78f748b45-xcpg8"] Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.422632 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.425753 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.426020 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.433568 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-78f748b45-xcpg8"] Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.510435 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f267d07b-2357-45ee-999d-94fad4f7bbce-monitoring-plugin-cert\") pod \"monitoring-plugin-78f748b45-xcpg8\" (UID: \"f267d07b-2357-45ee-999d-94fad4f7bbce\") " pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.611580 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f267d07b-2357-45ee-999d-94fad4f7bbce-monitoring-plugin-cert\") pod \"monitoring-plugin-78f748b45-xcpg8\" (UID: \"f267d07b-2357-45ee-999d-94fad4f7bbce\") " pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.622129 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f267d07b-2357-45ee-999d-94fad4f7bbce-monitoring-plugin-cert\") pod \"monitoring-plugin-78f748b45-xcpg8\" (UID: \"f267d07b-2357-45ee-999d-94fad4f7bbce\") " pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.738966 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.960070 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.968911 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.972477 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.974370 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.974503 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.974588 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.974643 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.974992 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.975167 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-8on232nh7247a" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.975376 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.975559 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-mpd7l" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.978109 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.978534 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.986761 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.998308 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Dec 12 15:55:07 crc kubenswrapper[4693]: I1212 15:55:07.998721 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.009366 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5749ff4bc8-hnj69" event={"ID":"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85","Type":"ContainerStarted","Data":"ec4b0a34f5409f41959bb7722d795421a378f9a3bbecb2d221f10cd0380ddb82"} Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.009420 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5749ff4bc8-hnj69" event={"ID":"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85","Type":"ContainerStarted","Data":"c2c586c4dff1cec489740e3279ce89da0c213673ec03fdf66d98f2685be96076"} Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016640 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/cebb8554-fc88-4ab5-b2d9-61495b3648f6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016689 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebb8554-fc88-4ab5-b2d9-61495b3648f6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016715 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016732 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cebb8554-fc88-4ab5-b2d9-61495b3648f6-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016750 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016777 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016799 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cebb8554-fc88-4ab5-b2d9-61495b3648f6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016815 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-web-config\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016834 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebb8554-fc88-4ab5-b2d9-61495b3648f6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 
15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016853 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cebb8554-fc88-4ab5-b2d9-61495b3648f6-config-out\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016875 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016890 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-config\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016905 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016925 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/cebb8554-fc88-4ab5-b2d9-61495b3648f6-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016941 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016961 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebb8554-fc88-4ab5-b2d9-61495b3648f6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.016982 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb2pv\" (UniqueName: \"kubernetes.io/projected/cebb8554-fc88-4ab5-b2d9-61495b3648f6-kube-api-access-hb2pv\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.017012 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-grpc-tls\") 
pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118330 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebb8554-fc88-4ab5-b2d9-61495b3648f6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118380 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb2pv\" (UniqueName: \"kubernetes.io/projected/cebb8554-fc88-4ab5-b2d9-61495b3648f6-kube-api-access-hb2pv\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118415 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118447 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cebb8554-fc88-4ab5-b2d9-61495b3648f6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118495 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebb8554-fc88-4ab5-b2d9-61495b3648f6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118528 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118555 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cebb8554-fc88-4ab5-b2d9-61495b3648f6-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118575 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118657 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118675 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cebb8554-fc88-4ab5-b2d9-61495b3648f6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118694 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-web-config\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118716 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebb8554-fc88-4ab5-b2d9-61495b3648f6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118735 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cebb8554-fc88-4ab5-b2d9-61495b3648f6-config-out\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118774 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118791 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-config\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118810 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118838 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/cebb8554-fc88-4ab5-b2d9-61495b3648f6-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.118858 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: 
\"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.119814 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebb8554-fc88-4ab5-b2d9-61495b3648f6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.120506 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebb8554-fc88-4ab5-b2d9-61495b3648f6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.120649 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cebb8554-fc88-4ab5-b2d9-61495b3648f6-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.122719 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.123015 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.123652 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebb8554-fc88-4ab5-b2d9-61495b3648f6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.123748 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/cebb8554-fc88-4ab5-b2d9-61495b3648f6-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.124412 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cebb8554-fc88-4ab5-b2d9-61495b3648f6-config-out\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.125172 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.125698 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.127722 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cebb8554-fc88-4ab5-b2d9-61495b3648f6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.130344 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.130472 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cebb8554-fc88-4ab5-b2d9-61495b3648f6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.133754 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-config\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.134288 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-web-config\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.135990 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.140326 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/cebb8554-fc88-4ab5-b2d9-61495b3648f6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.149177 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb2pv\" (UniqueName: \"kubernetes.io/projected/cebb8554-fc88-4ab5-b2d9-61495b3648f6-kube-api-access-hb2pv\") pod \"prometheus-k8s-0\" (UID: 
\"cebb8554-fc88-4ab5-b2d9-61495b3648f6\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:08 crc kubenswrapper[4693]: I1212 15:55:08.321936 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:10 crc kubenswrapper[4693]: I1212 15:55:10.430799 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5749ff4bc8-hnj69" podStartSLOduration=4.43077948 podStartE2EDuration="4.43077948s" podCreationTimestamp="2025-12-12 15:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:55:08.035195352 +0000 UTC m=+535.203834953" watchObservedRunningTime="2025-12-12 15:55:10.43077948 +0000 UTC m=+537.599419081" Dec 12 15:55:10 crc kubenswrapper[4693]: I1212 15:55:10.435001 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Dec 12 15:55:10 crc kubenswrapper[4693]: I1212 15:55:10.468126 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-78f748b45-xcpg8"] Dec 12 15:55:10 crc kubenswrapper[4693]: I1212 15:55:10.472898 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-66fcfb545d-whswt"] Dec 12 15:55:10 crc kubenswrapper[4693]: W1212 15:55:10.482116 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbb8e58d_1fc6_4b66_82b3_ad43d71c4ce0.slice/crio-c8c1dd127ee8d5565097d4150b74bac0426c10dfb5af63070145437367e8ac58 WatchSource:0}: Error finding container c8c1dd127ee8d5565097d4150b74bac0426c10dfb5af63070145437367e8ac58: Status 404 returned error can't find the container with id c8c1dd127ee8d5565097d4150b74bac0426c10dfb5af63070145437367e8ac58 Dec 12 15:55:10 crc kubenswrapper[4693]: W1212 15:55:10.485805 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf267d07b_2357_45ee_999d_94fad4f7bbce.slice/crio-5181a1456ea7593fda44b4e2e890c140efda7944a86c76208103b78b0281d717 WatchSource:0}: Error finding container 5181a1456ea7593fda44b4e2e890c140efda7944a86c76208103b78b0281d717: Status 404 returned error can't find the container with id 5181a1456ea7593fda44b4e2e890c140efda7944a86c76208103b78b0281d717 Dec 12 15:55:11 crc kubenswrapper[4693]: I1212 15:55:11.029063 4693 generic.go:334] "Generic (PLEG): container finished" podID="cebb8554-fc88-4ab5-b2d9-61495b3648f6" containerID="980408189a8da34eda8efb3207687b75f67351af27607cdd17fce46b9fd5da96" exitCode=0 Dec 12 15:55:11 crc kubenswrapper[4693]: I1212 15:55:11.029130 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"cebb8554-fc88-4ab5-b2d9-61495b3648f6","Type":"ContainerDied","Data":"980408189a8da34eda8efb3207687b75f67351af27607cdd17fce46b9fd5da96"} Dec 12 15:55:11 crc kubenswrapper[4693]: I1212 15:55:11.029214 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"cebb8554-fc88-4ab5-b2d9-61495b3648f6","Type":"ContainerStarted","Data":"734a101ae339f787c9d6c168e9848268dbdbe2434d917015317256df147a4cc7"} Dec 12 15:55:11 crc kubenswrapper[4693]: I1212 15:55:11.030304 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" 
event={"ID":"f267d07b-2357-45ee-999d-94fad4f7bbce","Type":"ContainerStarted","Data":"5181a1456ea7593fda44b4e2e890c140efda7944a86c76208103b78b0281d717"} Dec 12 15:55:11 crc kubenswrapper[4693]: I1212 15:55:11.033698 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"84c903fa-ee30-441e-b8a2-b1e5825763cc","Type":"ContainerStarted","Data":"90223ffdaf5c0224cb9afea471fded99c60e8d52f38241921af6e79b843eebb8"} Dec 12 15:55:11 crc kubenswrapper[4693]: I1212 15:55:11.033733 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"84c903fa-ee30-441e-b8a2-b1e5825763cc","Type":"ContainerStarted","Data":"56a1f2115dcbe9967200029fe33551a8acfad453fc1432b04a0a3c8f71034106"} Dec 12 15:55:11 crc kubenswrapper[4693]: I1212 15:55:11.033751 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"84c903fa-ee30-441e-b8a2-b1e5825763cc","Type":"ContainerStarted","Data":"e0899b5661eb8c4775c755f3525bbf37ec1c0d9c61d28ccd0935f9adf88bda98"} Dec 12 15:55:11 crc kubenswrapper[4693]: I1212 15:55:11.034969 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" event={"ID":"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0","Type":"ContainerStarted","Data":"c8c1dd127ee8d5565097d4150b74bac0426c10dfb5af63070145437367e8ac58"} Dec 12 15:55:11 crc kubenswrapper[4693]: I1212 15:55:11.037139 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" event={"ID":"3808e52c-3efa-4017-b799-bc195fd1d611","Type":"ContainerStarted","Data":"03fb824dbd093645c8416d02c77adb8b65905a176d79055dfcf14d9595b3666c"} Dec 12 15:55:11 crc kubenswrapper[4693]: I1212 15:55:11.037171 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" event={"ID":"3808e52c-3efa-4017-b799-bc195fd1d611","Type":"ContainerStarted","Data":"d75464847e395410506b10b16826d44b955471495c636efd5f16030362a76b5e"} Dec 12 15:55:11 crc kubenswrapper[4693]: I1212 15:55:11.037182 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" event={"ID":"3808e52c-3efa-4017-b799-bc195fd1d611","Type":"ContainerStarted","Data":"ffa38e8243acb6811d4f162736b826fcf70a449f93ede9c22af30ecb0d6079cc"} Dec 12 15:55:12 crc kubenswrapper[4693]: I1212 15:55:12.046278 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"84c903fa-ee30-441e-b8a2-b1e5825763cc","Type":"ContainerStarted","Data":"2aef525a743bc13232257fa0d250a346a7c4ff13fa6e5c4d45bd1422800c79f4"} Dec 12 15:55:12 crc kubenswrapper[4693]: I1212 15:55:12.046606 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"84c903fa-ee30-441e-b8a2-b1e5825763cc","Type":"ContainerStarted","Data":"b2a12dcee6073e1cca5209045bfa20f68b4e1494cdd3faac1d980eb542e6c3eb"} Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.077894 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" event={"ID":"3808e52c-3efa-4017-b799-bc195fd1d611","Type":"ContainerStarted","Data":"386efaac33c13bd5b859bd79b9b1844193d0992673f160cc9b011280717659b9"} Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.078482 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.078528 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" event={"ID":"3808e52c-3efa-4017-b799-bc195fd1d611","Type":"ContainerStarted","Data":"e2b0a84c7e76dc45eb1050ed7e2d2d464df7c22faa9f4ee4d50c03d0e7a81963"} Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.078540 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" event={"ID":"3808e52c-3efa-4017-b799-bc195fd1d611","Type":"ContainerStarted","Data":"032c4770ea60a0c27b0beecbbe974ae45a5535051f062a6d44396fa38ab1ae0d"} Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.081001 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" event={"ID":"f267d07b-2357-45ee-999d-94fad4f7bbce","Type":"ContainerStarted","Data":"f1b1daca91f861961b095372d2287537d893183be09fb743677c7c3e48e22560"} Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.081306 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.087408 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.087691 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.089567 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"84c903fa-ee30-441e-b8a2-b1e5825763cc","Type":"ContainerStarted","Data":"6b9026051afb2378fcef2a39ccbf8703ab86f947759dfd3b834554aaa951ae6a"} Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.095491 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" event={"ID":"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0","Type":"ContainerStarted","Data":"5915a36368dfe5a7b12fb280ceb2ddc12c9a9dadd30ab5b7207f97204a71c11e"} Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.100293 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" podStartSLOduration=2.628573806 podStartE2EDuration="13.100255602s" podCreationTimestamp="2025-12-12 15:55:03 +0000 UTC" firstStartedPulling="2025-12-12 15:55:04.850181791 +0000 UTC m=+532.018821402" lastFinishedPulling="2025-12-12 15:55:15.321863597 +0000 UTC m=+542.490503198" observedRunningTime="2025-12-12 15:55:16.09905181 +0000 UTC m=+543.267691421" watchObservedRunningTime="2025-12-12 15:55:16.100255602 +0000 UTC m=+543.268895203" Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.116729 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" podStartSLOduration=4.525101242 podStartE2EDuration="9.116712537s" podCreationTimestamp="2025-12-12 15:55:07 +0000 UTC" firstStartedPulling="2025-12-12 15:55:10.488420397 +0000 UTC m=+537.657059988" lastFinishedPulling="2025-12-12 15:55:15.080031692 +0000 UTC m=+542.248671283" observedRunningTime="2025-12-12 15:55:16.115396952 +0000 UTC m=+543.284036553" watchObservedRunningTime="2025-12-12 15:55:16.116712537 +0000 UTC m=+543.285352138" 
Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.158574 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.705859841 podStartE2EDuration="14.158548238s" podCreationTimestamp="2025-12-12 15:55:02 +0000 UTC" firstStartedPulling="2025-12-12 15:55:03.869143019 +0000 UTC m=+531.037782620" lastFinishedPulling="2025-12-12 15:55:15.321831416 +0000 UTC m=+542.490471017" observedRunningTime="2025-12-12 15:55:16.152434383 +0000 UTC m=+543.321073984" watchObservedRunningTime="2025-12-12 15:55:16.158548238 +0000 UTC m=+543.327187849" Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.189091 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" podStartSLOduration=4.596045061 podStartE2EDuration="9.189073613s" podCreationTimestamp="2025-12-12 15:55:07 +0000 UTC" firstStartedPulling="2025-12-12 15:55:10.487096982 +0000 UTC m=+537.655736583" lastFinishedPulling="2025-12-12 15:55:15.080125534 +0000 UTC m=+542.248765135" observedRunningTime="2025-12-12 15:55:16.188040205 +0000 UTC m=+543.356679806" watchObservedRunningTime="2025-12-12 15:55:16.189073613 +0000 UTC m=+543.357713214" Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.774289 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.774355 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:16 crc kubenswrapper[4693]: I1212 15:55:16.779096 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:17 crc kubenswrapper[4693]: I1212 15:55:17.108313 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:55:17 crc kubenswrapper[4693]: I1212 15:55:17.165505 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-b7rfx"] Dec 12 15:55:19 crc kubenswrapper[4693]: I1212 15:55:19.124628 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"cebb8554-fc88-4ab5-b2d9-61495b3648f6","Type":"ContainerStarted","Data":"8ca60fa11c0e4268c3fce82e0dc7d4a0c460ca6a5db188c4993ef25f51fc76d2"} Dec 12 15:55:19 crc kubenswrapper[4693]: I1212 15:55:19.125115 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"cebb8554-fc88-4ab5-b2d9-61495b3648f6","Type":"ContainerStarted","Data":"e6d2bb3eae27bd0e0d070de394306b6417aba02117911eac9225e324be78d7b3"} Dec 12 15:55:19 crc kubenswrapper[4693]: I1212 15:55:19.125132 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"cebb8554-fc88-4ab5-b2d9-61495b3648f6","Type":"ContainerStarted","Data":"17d6c531e0ec0bbc3b6a4f258739be4f909cdf98dc5d1e2954ba98623f4bd66d"} Dec 12 15:55:19 crc kubenswrapper[4693]: I1212 15:55:19.125146 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"cebb8554-fc88-4ab5-b2d9-61495b3648f6","Type":"ContainerStarted","Data":"7965a9f4ae4b364675bb3148179c7fe634d5a7309d2232aa6ddf65d99e7d709a"} Dec 12 15:55:19 crc kubenswrapper[4693]: I1212 15:55:19.125159 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"cebb8554-fc88-4ab5-b2d9-61495b3648f6","Type":"ContainerStarted","Data":"6e1933c79b7d06a89160e1117b4f9a39d984a936f3aa43c16223953fa8833a6e"} Dec 12 15:55:19 crc kubenswrapper[4693]: I1212 15:55:19.125174 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"cebb8554-fc88-4ab5-b2d9-61495b3648f6","Type":"ContainerStarted","Data":"38f46f27934d7560e25485590c60d3340044cfd4519edbc72640c910d434db2a"} Dec 12 15:55:19 crc kubenswrapper[4693]: I1212 15:55:19.155696 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.061433827 podStartE2EDuration="12.155676172s" podCreationTimestamp="2025-12-12 15:55:07 +0000 UTC" firstStartedPulling="2025-12-12 15:55:11.030645771 +0000 UTC m=+538.199285382" lastFinishedPulling="2025-12-12 15:55:18.124888106 +0000 UTC m=+545.293527727" observedRunningTime="2025-12-12 15:55:19.154652274 +0000 UTC m=+546.323291895" watchObservedRunningTime="2025-12-12 15:55:19.155676172 +0000 UTC m=+546.324315783" Dec 12 15:55:23 crc kubenswrapper[4693]: I1212 15:55:23.323059 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:55:27 crc kubenswrapper[4693]: I1212 15:55:27.379071 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:27 crc kubenswrapper[4693]: I1212 15:55:27.379449 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:42 crc kubenswrapper[4693]: I1212 15:55:42.236809 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-b7rfx" podUID="c9efa1e6-826d-4d2f-8c65-5993738eb0b9" containerName="console" containerID="cri-o://2504ee2eb534663ae4128b9c6c7104560c512cd6996038a2b50187792e4c7039" gracePeriod=15 Dec 12 15:55:42 crc kubenswrapper[4693]: I1212 15:55:42.530833 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:55:42 crc kubenswrapper[4693]: I1212 15:55:42.530889 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.286405 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-b7rfx_c9efa1e6-826d-4d2f-8c65-5993738eb0b9/console/0.log" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.286790 4693 generic.go:334] "Generic (PLEG): container finished" podID="c9efa1e6-826d-4d2f-8c65-5993738eb0b9" containerID="2504ee2eb534663ae4128b9c6c7104560c512cd6996038a2b50187792e4c7039" exitCode=2 Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.286832 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b7rfx" 
event={"ID":"c9efa1e6-826d-4d2f-8c65-5993738eb0b9","Type":"ContainerDied","Data":"2504ee2eb534663ae4128b9c6c7104560c512cd6996038a2b50187792e4c7039"} Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.369971 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-b7rfx_c9efa1e6-826d-4d2f-8c65-5993738eb0b9/console/0.log" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.370048 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.479953 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2qhw\" (UniqueName: \"kubernetes.io/projected/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-kube-api-access-f2qhw\") pod \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.480047 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-serving-cert\") pod \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.480080 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-trusted-ca-bundle\") pod \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.480111 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-oauth-serving-cert\") pod \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.480145 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-service-ca\") pod \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.480358 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-config\") pod \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.480394 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-oauth-config\") pod \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\" (UID: \"c9efa1e6-826d-4d2f-8c65-5993738eb0b9\") " Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.480961 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-config" (OuterVolumeSpecName: "console-config") pod "c9efa1e6-826d-4d2f-8c65-5993738eb0b9" (UID: "c9efa1e6-826d-4d2f-8c65-5993738eb0b9"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.480986 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c9efa1e6-826d-4d2f-8c65-5993738eb0b9" (UID: "c9efa1e6-826d-4d2f-8c65-5993738eb0b9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.481000 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c9efa1e6-826d-4d2f-8c65-5993738eb0b9" (UID: "c9efa1e6-826d-4d2f-8c65-5993738eb0b9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.481038 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-service-ca" (OuterVolumeSpecName: "service-ca") pod "c9efa1e6-826d-4d2f-8c65-5993738eb0b9" (UID: "c9efa1e6-826d-4d2f-8c65-5993738eb0b9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.481499 4693 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.481519 4693 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.481533 4693 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.481545 4693 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.486070 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-kube-api-access-f2qhw" (OuterVolumeSpecName: "kube-api-access-f2qhw") pod "c9efa1e6-826d-4d2f-8c65-5993738eb0b9" (UID: "c9efa1e6-826d-4d2f-8c65-5993738eb0b9"). InnerVolumeSpecName "kube-api-access-f2qhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.486302 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c9efa1e6-826d-4d2f-8c65-5993738eb0b9" (UID: "c9efa1e6-826d-4d2f-8c65-5993738eb0b9"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.486724 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c9efa1e6-826d-4d2f-8c65-5993738eb0b9" (UID: "c9efa1e6-826d-4d2f-8c65-5993738eb0b9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.582584 4693 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.582692 4693 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:55:45 crc kubenswrapper[4693]: I1212 15:55:45.582712 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2qhw\" (UniqueName: \"kubernetes.io/projected/c9efa1e6-826d-4d2f-8c65-5993738eb0b9-kube-api-access-f2qhw\") on node \"crc\" DevicePath \"\"" Dec 12 15:55:46 crc kubenswrapper[4693]: I1212 15:55:46.295721 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-b7rfx_c9efa1e6-826d-4d2f-8c65-5993738eb0b9/console/0.log" Dec 12 15:55:46 crc kubenswrapper[4693]: I1212 15:55:46.295782 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b7rfx" event={"ID":"c9efa1e6-826d-4d2f-8c65-5993738eb0b9","Type":"ContainerDied","Data":"e8b68f4e5cb03a30f10c73c644d874372f922f080c66470b38185b25c38d2d12"} Dec 12 15:55:46 crc kubenswrapper[4693]: I1212 15:55:46.295830 4693 scope.go:117] "RemoveContainer" containerID="2504ee2eb534663ae4128b9c6c7104560c512cd6996038a2b50187792e4c7039" Dec 12 15:55:46 crc kubenswrapper[4693]: I1212 15:55:46.295860 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-b7rfx" Dec 12 15:55:46 crc kubenswrapper[4693]: I1212 15:55:46.334517 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-b7rfx"] Dec 12 15:55:46 crc kubenswrapper[4693]: I1212 15:55:46.344561 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-b7rfx"] Dec 12 15:55:47 crc kubenswrapper[4693]: I1212 15:55:47.367906 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9efa1e6-826d-4d2f-8c65-5993738eb0b9" path="/var/lib/kubelet/pods/c9efa1e6-826d-4d2f-8c65-5993738eb0b9/volumes" Dec 12 15:55:47 crc kubenswrapper[4693]: I1212 15:55:47.389737 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:55:47 crc kubenswrapper[4693]: I1212 15:55:47.394466 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 15:56:08 crc kubenswrapper[4693]: I1212 15:56:08.322820 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:56:08 crc kubenswrapper[4693]: I1212 15:56:08.361352 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:56:08 crc kubenswrapper[4693]: I1212 15:56:08.458838 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Dec 12 15:56:12 crc kubenswrapper[4693]: I1212 15:56:12.531330 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:56:12 crc kubenswrapper[4693]: I1212 15:56:12.532033 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:56:42 crc kubenswrapper[4693]: I1212 15:56:42.531091 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:56:42 crc kubenswrapper[4693]: I1212 15:56:42.532465 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:56:42 crc kubenswrapper[4693]: I1212 15:56:42.532559 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:56:42 crc kubenswrapper[4693]: I1212 15:56:42.533295 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"13e55d11ade86a76f1b5f387056a50a27a64fcc2f93e4354f4e32727ed6ed0c7"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 15:56:42 crc kubenswrapper[4693]: I1212 15:56:42.533366 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://13e55d11ade86a76f1b5f387056a50a27a64fcc2f93e4354f4e32727ed6ed0c7" gracePeriod=600 Dec 12 15:56:42 crc kubenswrapper[4693]: I1212 15:56:42.656729 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"13e55d11ade86a76f1b5f387056a50a27a64fcc2f93e4354f4e32727ed6ed0c7"} Dec 12 15:56:42 crc kubenswrapper[4693]: I1212 15:56:42.656792 4693 scope.go:117] "RemoveContainer" containerID="5ff91bd354fd1b1d52f5914f816ce98932ace1f4aced9a2d721aa0982cc50f10" Dec 12 15:56:42 crc kubenswrapper[4693]: I1212 15:56:42.656668 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="13e55d11ade86a76f1b5f387056a50a27a64fcc2f93e4354f4e32727ed6ed0c7" exitCode=0 Dec 12 15:56:43 crc kubenswrapper[4693]: I1212 15:56:43.667772 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"74051a73f37429f331f62089999985291d71febca1fcfa63c6624e65d5235174"} Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.281082 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7b8649454d-tq4z4"] Dec 12 15:57:00 crc kubenswrapper[4693]: E1212 15:57:00.282452 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9efa1e6-826d-4d2f-8c65-5993738eb0b9" containerName="console" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.282479 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9efa1e6-826d-4d2f-8c65-5993738eb0b9" containerName="console" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.282708 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9efa1e6-826d-4d2f-8c65-5993738eb0b9" containerName="console" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.283399 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.294009 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b8649454d-tq4z4"] Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.374051 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-trusted-ca-bundle\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.374097 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-serving-cert\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.374121 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-config\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.374206 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-oauth-serving-cert\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.374228 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tkls\" (UniqueName: \"kubernetes.io/projected/41ef7a5e-ff14-4a24-9aae-357f1f06e874-kube-api-access-7tkls\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.374254 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-oauth-config\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.374313 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-service-ca\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.475338 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-trusted-ca-bundle\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc 
kubenswrapper[4693]: I1212 15:57:00.475443 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-serving-cert\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.475478 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-config\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.475563 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-oauth-serving-cert\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.475591 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tkls\" (UniqueName: \"kubernetes.io/projected/41ef7a5e-ff14-4a24-9aae-357f1f06e874-kube-api-access-7tkls\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.475630 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-oauth-config\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.475681 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-service-ca\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.476491 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-config\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.476841 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-oauth-serving-cert\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.477686 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-service-ca\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.478689 
4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-trusted-ca-bundle\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.482807 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-oauth-config\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.482882 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-serving-cert\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.495956 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tkls\" (UniqueName: \"kubernetes.io/projected/41ef7a5e-ff14-4a24-9aae-357f1f06e874-kube-api-access-7tkls\") pod \"console-7b8649454d-tq4z4\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.601979 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:00 crc kubenswrapper[4693]: I1212 15:57:00.990152 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b8649454d-tq4z4"] Dec 12 15:57:00 crc kubenswrapper[4693]: W1212 15:57:00.998288 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41ef7a5e_ff14_4a24_9aae_357f1f06e874.slice/crio-5a1b0cb3bc5eb1412791fe1e103afaaa64d0928e345b3b1f8d886b8652172bf1 WatchSource:0}: Error finding container 5a1b0cb3bc5eb1412791fe1e103afaaa64d0928e345b3b1f8d886b8652172bf1: Status 404 returned error can't find the container with id 5a1b0cb3bc5eb1412791fe1e103afaaa64d0928e345b3b1f8d886b8652172bf1 Dec 12 15:57:01 crc kubenswrapper[4693]: I1212 15:57:01.789155 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b8649454d-tq4z4" event={"ID":"41ef7a5e-ff14-4a24-9aae-357f1f06e874","Type":"ContainerStarted","Data":"73912ce5f6930e39ea46f4ebad8eca56eefd55a44d530d0eb7ed8b5623b1dae2"} Dec 12 15:57:01 crc kubenswrapper[4693]: I1212 15:57:01.789573 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b8649454d-tq4z4" event={"ID":"41ef7a5e-ff14-4a24-9aae-357f1f06e874","Type":"ContainerStarted","Data":"5a1b0cb3bc5eb1412791fe1e103afaaa64d0928e345b3b1f8d886b8652172bf1"} Dec 12 15:57:01 crc kubenswrapper[4693]: I1212 15:57:01.809207 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7b8649454d-tq4z4" podStartSLOduration=1.8091873029999999 podStartE2EDuration="1.809187303s" podCreationTimestamp="2025-12-12 15:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 15:57:01.805115273 +0000 UTC m=+648.973754894" 
watchObservedRunningTime="2025-12-12 15:57:01.809187303 +0000 UTC m=+648.977826924" Dec 12 15:57:10 crc kubenswrapper[4693]: I1212 15:57:10.602757 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:10 crc kubenswrapper[4693]: I1212 15:57:10.603821 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:10 crc kubenswrapper[4693]: I1212 15:57:10.608310 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:10 crc kubenswrapper[4693]: I1212 15:57:10.873721 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 15:57:10 crc kubenswrapper[4693]: I1212 15:57:10.931226 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5749ff4bc8-hnj69"] Dec 12 15:57:35 crc kubenswrapper[4693]: I1212 15:57:35.983624 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5749ff4bc8-hnj69" podUID="1389d6fe-36fc-4b22-b9f3-d1f12e2fac85" containerName="console" containerID="cri-o://ec4b0a34f5409f41959bb7722d795421a378f9a3bbecb2d221f10cd0380ddb82" gracePeriod=15 Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.315127 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5749ff4bc8-hnj69_1389d6fe-36fc-4b22-b9f3-d1f12e2fac85/console/0.log" Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.315439 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.404163 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-trusted-ca-bundle\") pod \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.404252 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-serving-cert\") pod \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.404368 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-config\") pod \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.404408 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-service-ca\") pod \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.404518 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwzfb\" (UniqueName: \"kubernetes.io/projected/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-kube-api-access-pwzfb\") pod \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\" (UID: 
\"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.404614 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-oauth-serving-cert\") pod \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.404810 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-oauth-config\") pod \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\" (UID: \"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85\") " Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.405342 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-service-ca" (OuterVolumeSpecName: "service-ca") pod "1389d6fe-36fc-4b22-b9f3-d1f12e2fac85" (UID: "1389d6fe-36fc-4b22-b9f3-d1f12e2fac85"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.405446 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-config" (OuterVolumeSpecName: "console-config") pod "1389d6fe-36fc-4b22-b9f3-d1f12e2fac85" (UID: "1389d6fe-36fc-4b22-b9f3-d1f12e2fac85"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.405458 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1389d6fe-36fc-4b22-b9f3-d1f12e2fac85" (UID: "1389d6fe-36fc-4b22-b9f3-d1f12e2fac85"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.405590 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1389d6fe-36fc-4b22-b9f3-d1f12e2fac85" (UID: "1389d6fe-36fc-4b22-b9f3-d1f12e2fac85"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.405859 4693 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.405911 4693 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.405937 4693 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.405962 4693 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.410463 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1389d6fe-36fc-4b22-b9f3-d1f12e2fac85" (UID: "1389d6fe-36fc-4b22-b9f3-d1f12e2fac85"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.410481 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-kube-api-access-pwzfb" (OuterVolumeSpecName: "kube-api-access-pwzfb") pod "1389d6fe-36fc-4b22-b9f3-d1f12e2fac85" (UID: "1389d6fe-36fc-4b22-b9f3-d1f12e2fac85"). InnerVolumeSpecName "kube-api-access-pwzfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.410533 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1389d6fe-36fc-4b22-b9f3-d1f12e2fac85" (UID: "1389d6fe-36fc-4b22-b9f3-d1f12e2fac85"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.507403 4693 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.507468 4693 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 15:57:36 crc kubenswrapper[4693]: I1212 15:57:36.507494 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwzfb\" (UniqueName: \"kubernetes.io/projected/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85-kube-api-access-pwzfb\") on node \"crc\" DevicePath \"\"" Dec 12 15:57:37 crc kubenswrapper[4693]: I1212 15:57:37.045151 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5749ff4bc8-hnj69_1389d6fe-36fc-4b22-b9f3-d1f12e2fac85/console/0.log" Dec 12 15:57:37 crc kubenswrapper[4693]: I1212 15:57:37.045201 4693 generic.go:334] "Generic (PLEG): container finished" podID="1389d6fe-36fc-4b22-b9f3-d1f12e2fac85" containerID="ec4b0a34f5409f41959bb7722d795421a378f9a3bbecb2d221f10cd0380ddb82" exitCode=2 Dec 12 15:57:37 crc kubenswrapper[4693]: I1212 15:57:37.045229 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5749ff4bc8-hnj69" event={"ID":"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85","Type":"ContainerDied","Data":"ec4b0a34f5409f41959bb7722d795421a378f9a3bbecb2d221f10cd0380ddb82"} Dec 12 15:57:37 crc kubenswrapper[4693]: I1212 15:57:37.045273 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5749ff4bc8-hnj69" event={"ID":"1389d6fe-36fc-4b22-b9f3-d1f12e2fac85","Type":"ContainerDied","Data":"c2c586c4dff1cec489740e3279ce89da0c213673ec03fdf66d98f2685be96076"} Dec 12 15:57:37 crc kubenswrapper[4693]: I1212 15:57:37.045319 4693 scope.go:117] "RemoveContainer" containerID="ec4b0a34f5409f41959bb7722d795421a378f9a3bbecb2d221f10cd0380ddb82" Dec 12 15:57:37 crc kubenswrapper[4693]: I1212 15:57:37.045371 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5749ff4bc8-hnj69" Dec 12 15:57:37 crc kubenswrapper[4693]: I1212 15:57:37.060635 4693 scope.go:117] "RemoveContainer" containerID="ec4b0a34f5409f41959bb7722d795421a378f9a3bbecb2d221f10cd0380ddb82" Dec 12 15:57:37 crc kubenswrapper[4693]: E1212 15:57:37.061144 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec4b0a34f5409f41959bb7722d795421a378f9a3bbecb2d221f10cd0380ddb82\": container with ID starting with ec4b0a34f5409f41959bb7722d795421a378f9a3bbecb2d221f10cd0380ddb82 not found: ID does not exist" containerID="ec4b0a34f5409f41959bb7722d795421a378f9a3bbecb2d221f10cd0380ddb82" Dec 12 15:57:37 crc kubenswrapper[4693]: I1212 15:57:37.061183 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec4b0a34f5409f41959bb7722d795421a378f9a3bbecb2d221f10cd0380ddb82"} err="failed to get container status \"ec4b0a34f5409f41959bb7722d795421a378f9a3bbecb2d221f10cd0380ddb82\": rpc error: code = NotFound desc = could not find container \"ec4b0a34f5409f41959bb7722d795421a378f9a3bbecb2d221f10cd0380ddb82\": container with ID starting with ec4b0a34f5409f41959bb7722d795421a378f9a3bbecb2d221f10cd0380ddb82 not found: ID does not exist" Dec 12 15:57:37 crc kubenswrapper[4693]: I1212 15:57:37.082313 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5749ff4bc8-hnj69"] Dec 12 15:57:37 crc kubenswrapper[4693]: I1212 15:57:37.085858 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5749ff4bc8-hnj69"] Dec 12 15:57:37 crc kubenswrapper[4693]: I1212 15:57:37.366423 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1389d6fe-36fc-4b22-b9f3-d1f12e2fac85" path="/var/lib/kubelet/pods/1389d6fe-36fc-4b22-b9f3-d1f12e2fac85/volumes" Dec 12 15:58:42 crc kubenswrapper[4693]: I1212 15:58:42.530859 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:58:42 crc kubenswrapper[4693]: I1212 15:58:42.531414 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:59:00 crc kubenswrapper[4693]: I1212 15:59:00.618384 4693 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 15:59:12 crc kubenswrapper[4693]: I1212 15:59:12.531026 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:59:12 crc kubenswrapper[4693]: I1212 15:59:12.531826 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Dec 12 15:59:42 crc kubenswrapper[4693]: I1212 15:59:42.530227 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 15:59:42 crc kubenswrapper[4693]: I1212 15:59:42.530891 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 15:59:42 crc kubenswrapper[4693]: I1212 15:59:42.530938 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 15:59:42 crc kubenswrapper[4693]: I1212 15:59:42.531556 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"74051a73f37429f331f62089999985291d71febca1fcfa63c6624e65d5235174"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 15:59:42 crc kubenswrapper[4693]: I1212 15:59:42.531618 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://74051a73f37429f331f62089999985291d71febca1fcfa63c6624e65d5235174" gracePeriod=600 Dec 12 15:59:43 crc kubenswrapper[4693]: I1212 15:59:43.864823 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="74051a73f37429f331f62089999985291d71febca1fcfa63c6624e65d5235174" exitCode=0 Dec 12 15:59:43 crc kubenswrapper[4693]: I1212 15:59:43.864972 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"74051a73f37429f331f62089999985291d71febca1fcfa63c6624e65d5235174"} Dec 12 15:59:43 crc kubenswrapper[4693]: I1212 15:59:43.865201 4693 scope.go:117] "RemoveContainer" containerID="13e55d11ade86a76f1b5f387056a50a27a64fcc2f93e4354f4e32727ed6ed0c7" Dec 12 15:59:44 crc kubenswrapper[4693]: I1212 15:59:44.875043 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"6f8076bbaf9c92a7134e9ae28b9eeeb9f0776e367f05e17a692efcc2523d8648"} Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.220260 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t"] Dec 12 16:00:00 crc kubenswrapper[4693]: E1212 16:00:00.221024 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1389d6fe-36fc-4b22-b9f3-d1f12e2fac85" containerName="console" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.221043 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="1389d6fe-36fc-4b22-b9f3-d1f12e2fac85" containerName="console" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.221161 
4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="1389d6fe-36fc-4b22-b9f3-d1f12e2fac85" containerName="console" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.221602 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.224262 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.232564 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.244792 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t"] Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.338471 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50cd7719-da3b-43a9-980b-60c1709e862e-config-volume\") pod \"collect-profiles-29425920-vdz7t\" (UID: \"50cd7719-da3b-43a9-980b-60c1709e862e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.338540 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50cd7719-da3b-43a9-980b-60c1709e862e-secret-volume\") pod \"collect-profiles-29425920-vdz7t\" (UID: \"50cd7719-da3b-43a9-980b-60c1709e862e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.338604 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt4wj\" (UniqueName: \"kubernetes.io/projected/50cd7719-da3b-43a9-980b-60c1709e862e-kube-api-access-jt4wj\") pod \"collect-profiles-29425920-vdz7t\" (UID: \"50cd7719-da3b-43a9-980b-60c1709e862e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.440553 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt4wj\" (UniqueName: \"kubernetes.io/projected/50cd7719-da3b-43a9-980b-60c1709e862e-kube-api-access-jt4wj\") pod \"collect-profiles-29425920-vdz7t\" (UID: \"50cd7719-da3b-43a9-980b-60c1709e862e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.440693 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50cd7719-da3b-43a9-980b-60c1709e862e-config-volume\") pod \"collect-profiles-29425920-vdz7t\" (UID: \"50cd7719-da3b-43a9-980b-60c1709e862e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.440727 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50cd7719-da3b-43a9-980b-60c1709e862e-secret-volume\") pod \"collect-profiles-29425920-vdz7t\" (UID: \"50cd7719-da3b-43a9-980b-60c1709e862e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" Dec 12 
16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.442176 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50cd7719-da3b-43a9-980b-60c1709e862e-config-volume\") pod \"collect-profiles-29425920-vdz7t\" (UID: \"50cd7719-da3b-43a9-980b-60c1709e862e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.448124 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50cd7719-da3b-43a9-980b-60c1709e862e-secret-volume\") pod \"collect-profiles-29425920-vdz7t\" (UID: \"50cd7719-da3b-43a9-980b-60c1709e862e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.462004 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt4wj\" (UniqueName: \"kubernetes.io/projected/50cd7719-da3b-43a9-980b-60c1709e862e-kube-api-access-jt4wj\") pod \"collect-profiles-29425920-vdz7t\" (UID: \"50cd7719-da3b-43a9-980b-60c1709e862e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.544770 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.787497 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t"] Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.865379 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x"] Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.867196 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.872139 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.880088 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x"] Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.970537 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" event={"ID":"50cd7719-da3b-43a9-980b-60c1709e862e","Type":"ContainerStarted","Data":"ce3b52dd13275eea79a053e69d6d57564272aa4153888241fd3705c9db372eac"} Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.970591 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" event={"ID":"50cd7719-da3b-43a9-980b-60c1709e862e","Type":"ContainerStarted","Data":"3ae452afc057a6402300e6377afcf040b3f95ddca2510040093cb9705a3debf5"} Dec 12 16:00:00 crc kubenswrapper[4693]: I1212 16:00:00.987710 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" podStartSLOduration=0.987688309 podStartE2EDuration="987.688309ms" podCreationTimestamp="2025-12-12 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:00:00.985243042 +0000 UTC m=+828.153882643" watchObservedRunningTime="2025-12-12 16:00:00.987688309 +0000 UTC m=+828.156327910" Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.052609 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbwjg\" (UniqueName: \"kubernetes.io/projected/40def537-a8e1-4598-bcbd-c5b77b352fda-kube-api-access-xbwjg\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x\" (UID: \"40def537-a8e1-4598-bcbd-c5b77b352fda\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.052730 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40def537-a8e1-4598-bcbd-c5b77b352fda-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x\" (UID: \"40def537-a8e1-4598-bcbd-c5b77b352fda\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.052756 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40def537-a8e1-4598-bcbd-c5b77b352fda-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x\" (UID: \"40def537-a8e1-4598-bcbd-c5b77b352fda\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.160908 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40def537-a8e1-4598-bcbd-c5b77b352fda-bundle\") pod 
\"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x\" (UID: \"40def537-a8e1-4598-bcbd-c5b77b352fda\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.160999 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40def537-a8e1-4598-bcbd-c5b77b352fda-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x\" (UID: \"40def537-a8e1-4598-bcbd-c5b77b352fda\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.161071 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbwjg\" (UniqueName: \"kubernetes.io/projected/40def537-a8e1-4598-bcbd-c5b77b352fda-kube-api-access-xbwjg\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x\" (UID: \"40def537-a8e1-4598-bcbd-c5b77b352fda\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.162580 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40def537-a8e1-4598-bcbd-c5b77b352fda-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x\" (UID: \"40def537-a8e1-4598-bcbd-c5b77b352fda\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.162663 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40def537-a8e1-4598-bcbd-c5b77b352fda-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x\" (UID: \"40def537-a8e1-4598-bcbd-c5b77b352fda\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.182833 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbwjg\" (UniqueName: \"kubernetes.io/projected/40def537-a8e1-4598-bcbd-c5b77b352fda-kube-api-access-xbwjg\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x\" (UID: \"40def537-a8e1-4598-bcbd-c5b77b352fda\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.221619 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" Dec 12 16:00:01 crc kubenswrapper[4693]: E1212 16:00:01.250524 4693 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50cd7719_da3b_43a9_980b_60c1709e862e.slice/crio-conmon-ce3b52dd13275eea79a053e69d6d57564272aa4153888241fd3705c9db372eac.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50cd7719_da3b_43a9_980b_60c1709e862e.slice/crio-ce3b52dd13275eea79a053e69d6d57564272aa4153888241fd3705c9db372eac.scope\": RecentStats: unable to find data in memory cache]" Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.439860 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x"] Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.977764 4693 generic.go:334] "Generic (PLEG): container finished" podID="50cd7719-da3b-43a9-980b-60c1709e862e" containerID="ce3b52dd13275eea79a053e69d6d57564272aa4153888241fd3705c9db372eac" exitCode=0 Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.977854 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" event={"ID":"50cd7719-da3b-43a9-980b-60c1709e862e","Type":"ContainerDied","Data":"ce3b52dd13275eea79a053e69d6d57564272aa4153888241fd3705c9db372eac"} Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.979911 4693 generic.go:334] "Generic (PLEG): container finished" podID="40def537-a8e1-4598-bcbd-c5b77b352fda" containerID="ab625c5a9031808124c07f97c4280261eeee2e5883a8d3af605b391ff9ec30d1" exitCode=0 Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.979982 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" event={"ID":"40def537-a8e1-4598-bcbd-c5b77b352fda","Type":"ContainerDied","Data":"ab625c5a9031808124c07f97c4280261eeee2e5883a8d3af605b391ff9ec30d1"} Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.980047 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" event={"ID":"40def537-a8e1-4598-bcbd-c5b77b352fda","Type":"ContainerStarted","Data":"19c088ab9d7f17ee3d43a7225881a2d08cb875c8dc8673f326ba41674effbd2a"} Dec 12 16:00:01 crc kubenswrapper[4693]: I1212 16:00:01.981774 4693 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.126745 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xs4tf"] Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.130781 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.138556 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xs4tf"] Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.229547 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.295446 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzhtj\" (UniqueName: \"kubernetes.io/projected/95322c55-e1a1-41d4-b362-bfa4200629f5-kube-api-access-gzhtj\") pod \"redhat-operators-xs4tf\" (UID: \"95322c55-e1a1-41d4-b362-bfa4200629f5\") " pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.295684 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95322c55-e1a1-41d4-b362-bfa4200629f5-catalog-content\") pod \"redhat-operators-xs4tf\" (UID: \"95322c55-e1a1-41d4-b362-bfa4200629f5\") " pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.295805 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95322c55-e1a1-41d4-b362-bfa4200629f5-utilities\") pod \"redhat-operators-xs4tf\" (UID: \"95322c55-e1a1-41d4-b362-bfa4200629f5\") " pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.396941 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50cd7719-da3b-43a9-980b-60c1709e862e-config-volume\") pod \"50cd7719-da3b-43a9-980b-60c1709e862e\" (UID: \"50cd7719-da3b-43a9-980b-60c1709e862e\") " Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.397037 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt4wj\" (UniqueName: \"kubernetes.io/projected/50cd7719-da3b-43a9-980b-60c1709e862e-kube-api-access-jt4wj\") pod \"50cd7719-da3b-43a9-980b-60c1709e862e\" (UID: \"50cd7719-da3b-43a9-980b-60c1709e862e\") " Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.397115 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50cd7719-da3b-43a9-980b-60c1709e862e-secret-volume\") pod \"50cd7719-da3b-43a9-980b-60c1709e862e\" (UID: \"50cd7719-da3b-43a9-980b-60c1709e862e\") " Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.397460 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95322c55-e1a1-41d4-b362-bfa4200629f5-catalog-content\") pod \"redhat-operators-xs4tf\" (UID: \"95322c55-e1a1-41d4-b362-bfa4200629f5\") " pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.397525 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95322c55-e1a1-41d4-b362-bfa4200629f5-utilities\") pod \"redhat-operators-xs4tf\" (UID: \"95322c55-e1a1-41d4-b362-bfa4200629f5\") " pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.397559 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzhtj\" (UniqueName: \"kubernetes.io/projected/95322c55-e1a1-41d4-b362-bfa4200629f5-kube-api-access-gzhtj\") pod \"redhat-operators-xs4tf\" (UID: \"95322c55-e1a1-41d4-b362-bfa4200629f5\") " 
pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.397626 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50cd7719-da3b-43a9-980b-60c1709e862e-config-volume" (OuterVolumeSpecName: "config-volume") pod "50cd7719-da3b-43a9-980b-60c1709e862e" (UID: "50cd7719-da3b-43a9-980b-60c1709e862e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.398044 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95322c55-e1a1-41d4-b362-bfa4200629f5-catalog-content\") pod \"redhat-operators-xs4tf\" (UID: \"95322c55-e1a1-41d4-b362-bfa4200629f5\") " pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.398452 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95322c55-e1a1-41d4-b362-bfa4200629f5-utilities\") pod \"redhat-operators-xs4tf\" (UID: \"95322c55-e1a1-41d4-b362-bfa4200629f5\") " pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.402402 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50cd7719-da3b-43a9-980b-60c1709e862e-kube-api-access-jt4wj" (OuterVolumeSpecName: "kube-api-access-jt4wj") pod "50cd7719-da3b-43a9-980b-60c1709e862e" (UID: "50cd7719-da3b-43a9-980b-60c1709e862e"). InnerVolumeSpecName "kube-api-access-jt4wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.402730 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50cd7719-da3b-43a9-980b-60c1709e862e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "50cd7719-da3b-43a9-980b-60c1709e862e" (UID: "50cd7719-da3b-43a9-980b-60c1709e862e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.416128 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzhtj\" (UniqueName: \"kubernetes.io/projected/95322c55-e1a1-41d4-b362-bfa4200629f5-kube-api-access-gzhtj\") pod \"redhat-operators-xs4tf\" (UID: \"95322c55-e1a1-41d4-b362-bfa4200629f5\") " pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.456006 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.499604 4693 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50cd7719-da3b-43a9-980b-60c1709e862e-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.499638 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt4wj\" (UniqueName: \"kubernetes.io/projected/50cd7719-da3b-43a9-980b-60c1709e862e-kube-api-access-jt4wj\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.499654 4693 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50cd7719-da3b-43a9-980b-60c1709e862e-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.662380 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xs4tf"] Dec 12 16:00:03 crc kubenswrapper[4693]: W1212 16:00:03.665978 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95322c55_e1a1_41d4_b362_bfa4200629f5.slice/crio-2affd4bd3b73321fb6ec37401e52a24e75ed6c5a9222d4d5c5fe67964d6ec29b WatchSource:0}: Error finding container 2affd4bd3b73321fb6ec37401e52a24e75ed6c5a9222d4d5c5fe67964d6ec29b: Status 404 returned error can't find the container with id 2affd4bd3b73321fb6ec37401e52a24e75ed6c5a9222d4d5c5fe67964d6ec29b Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.992505 4693 generic.go:334] "Generic (PLEG): container finished" podID="95322c55-e1a1-41d4-b362-bfa4200629f5" containerID="ee212fdca6e7b1f49aac97ed8579563c0b58e041cb76fb801cb59d6a6e129697" exitCode=0 Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.992579 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xs4tf" event={"ID":"95322c55-e1a1-41d4-b362-bfa4200629f5","Type":"ContainerDied","Data":"ee212fdca6e7b1f49aac97ed8579563c0b58e041cb76fb801cb59d6a6e129697"} Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.992613 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xs4tf" event={"ID":"95322c55-e1a1-41d4-b362-bfa4200629f5","Type":"ContainerStarted","Data":"2affd4bd3b73321fb6ec37401e52a24e75ed6c5a9222d4d5c5fe67964d6ec29b"} Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.995549 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" event={"ID":"50cd7719-da3b-43a9-980b-60c1709e862e","Type":"ContainerDied","Data":"3ae452afc057a6402300e6377afcf040b3f95ddca2510040093cb9705a3debf5"} Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.995593 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ae452afc057a6402300e6377afcf040b3f95ddca2510040093cb9705a3debf5" Dec 12 16:00:03 crc kubenswrapper[4693]: I1212 16:00:03.995668 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t" Dec 12 16:00:05 crc kubenswrapper[4693]: I1212 16:00:05.007975 4693 generic.go:334] "Generic (PLEG): container finished" podID="40def537-a8e1-4598-bcbd-c5b77b352fda" containerID="f954e0716eb70c8d31d35ee815ab97448f55094db55a2b6101025292eeb6f326" exitCode=0 Dec 12 16:00:05 crc kubenswrapper[4693]: I1212 16:00:05.008500 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" event={"ID":"40def537-a8e1-4598-bcbd-c5b77b352fda","Type":"ContainerDied","Data":"f954e0716eb70c8d31d35ee815ab97448f55094db55a2b6101025292eeb6f326"} Dec 12 16:00:06 crc kubenswrapper[4693]: I1212 16:00:06.018338 4693 generic.go:334] "Generic (PLEG): container finished" podID="40def537-a8e1-4598-bcbd-c5b77b352fda" containerID="062dd899ea52d314ba9cf6cd037183990a2caeeafc2e4171159e120283d509ab" exitCode=0 Dec 12 16:00:06 crc kubenswrapper[4693]: I1212 16:00:06.018446 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" event={"ID":"40def537-a8e1-4598-bcbd-c5b77b352fda","Type":"ContainerDied","Data":"062dd899ea52d314ba9cf6cd037183990a2caeeafc2e4171159e120283d509ab"} Dec 12 16:00:06 crc kubenswrapper[4693]: I1212 16:00:06.022179 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xs4tf" event={"ID":"95322c55-e1a1-41d4-b362-bfa4200629f5","Type":"ContainerStarted","Data":"4d621771d9b3453938ad4c71f7bc9e39c6c54f75e37644365d2d60dc8d6256ec"} Dec 12 16:00:07 crc kubenswrapper[4693]: I1212 16:00:07.543264 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" Dec 12 16:00:07 crc kubenswrapper[4693]: I1212 16:00:07.609938 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbwjg\" (UniqueName: \"kubernetes.io/projected/40def537-a8e1-4598-bcbd-c5b77b352fda-kube-api-access-xbwjg\") pod \"40def537-a8e1-4598-bcbd-c5b77b352fda\" (UID: \"40def537-a8e1-4598-bcbd-c5b77b352fda\") " Dec 12 16:00:07 crc kubenswrapper[4693]: I1212 16:00:07.610023 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40def537-a8e1-4598-bcbd-c5b77b352fda-bundle\") pod \"40def537-a8e1-4598-bcbd-c5b77b352fda\" (UID: \"40def537-a8e1-4598-bcbd-c5b77b352fda\") " Dec 12 16:00:07 crc kubenswrapper[4693]: I1212 16:00:07.610095 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40def537-a8e1-4598-bcbd-c5b77b352fda-util\") pod \"40def537-a8e1-4598-bcbd-c5b77b352fda\" (UID: \"40def537-a8e1-4598-bcbd-c5b77b352fda\") " Dec 12 16:00:07 crc kubenswrapper[4693]: I1212 16:00:07.611936 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40def537-a8e1-4598-bcbd-c5b77b352fda-bundle" (OuterVolumeSpecName: "bundle") pod "40def537-a8e1-4598-bcbd-c5b77b352fda" (UID: "40def537-a8e1-4598-bcbd-c5b77b352fda"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:00:07 crc kubenswrapper[4693]: I1212 16:00:07.615761 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40def537-a8e1-4598-bcbd-c5b77b352fda-kube-api-access-xbwjg" (OuterVolumeSpecName: "kube-api-access-xbwjg") pod "40def537-a8e1-4598-bcbd-c5b77b352fda" (UID: "40def537-a8e1-4598-bcbd-c5b77b352fda"). InnerVolumeSpecName "kube-api-access-xbwjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:00:07 crc kubenswrapper[4693]: I1212 16:00:07.619872 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40def537-a8e1-4598-bcbd-c5b77b352fda-util" (OuterVolumeSpecName: "util") pod "40def537-a8e1-4598-bcbd-c5b77b352fda" (UID: "40def537-a8e1-4598-bcbd-c5b77b352fda"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:00:07 crc kubenswrapper[4693]: I1212 16:00:07.711277 4693 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40def537-a8e1-4598-bcbd-c5b77b352fda-util\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:07 crc kubenswrapper[4693]: I1212 16:00:07.711381 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbwjg\" (UniqueName: \"kubernetes.io/projected/40def537-a8e1-4598-bcbd-c5b77b352fda-kube-api-access-xbwjg\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:07 crc kubenswrapper[4693]: I1212 16:00:07.711392 4693 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40def537-a8e1-4598-bcbd-c5b77b352fda-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:08 crc kubenswrapper[4693]: I1212 16:00:08.035174 4693 generic.go:334] "Generic (PLEG): container finished" podID="95322c55-e1a1-41d4-b362-bfa4200629f5" containerID="4d621771d9b3453938ad4c71f7bc9e39c6c54f75e37644365d2d60dc8d6256ec" exitCode=0 Dec 12 16:00:08 crc kubenswrapper[4693]: I1212 16:00:08.035243 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xs4tf" event={"ID":"95322c55-e1a1-41d4-b362-bfa4200629f5","Type":"ContainerDied","Data":"4d621771d9b3453938ad4c71f7bc9e39c6c54f75e37644365d2d60dc8d6256ec"} Dec 12 16:00:08 crc kubenswrapper[4693]: I1212 16:00:08.041893 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" event={"ID":"40def537-a8e1-4598-bcbd-c5b77b352fda","Type":"ContainerDied","Data":"19c088ab9d7f17ee3d43a7225881a2d08cb875c8dc8673f326ba41674effbd2a"} Dec 12 16:00:08 crc kubenswrapper[4693]: I1212 16:00:08.041924 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19c088ab9d7f17ee3d43a7225881a2d08cb875c8dc8673f326ba41674effbd2a" Dec 12 16:00:08 crc kubenswrapper[4693]: I1212 16:00:08.041978 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210k5s9x" Dec 12 16:00:09 crc kubenswrapper[4693]: I1212 16:00:09.050429 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xs4tf" event={"ID":"95322c55-e1a1-41d4-b362-bfa4200629f5","Type":"ContainerStarted","Data":"70d384cc01cca49a3a5d958389bb806ded4685a92a5b7f7682f7e99ee6f05c34"} Dec 12 16:00:10 crc kubenswrapper[4693]: I1212 16:00:10.081081 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xs4tf" podStartSLOduration=2.299707845 podStartE2EDuration="7.081056451s" podCreationTimestamp="2025-12-12 16:00:03 +0000 UTC" firstStartedPulling="2025-12-12 16:00:04.000572647 +0000 UTC m=+831.169212258" lastFinishedPulling="2025-12-12 16:00:08.781921243 +0000 UTC m=+835.950560864" observedRunningTime="2025-12-12 16:00:10.076992021 +0000 UTC m=+837.245631642" watchObservedRunningTime="2025-12-12 16:00:10.081056451 +0000 UTC m=+837.249696062" Dec 12 16:00:11 crc kubenswrapper[4693]: I1212 16:00:11.950032 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ps9gt"] Dec 12 16:00:11 crc kubenswrapper[4693]: I1212 16:00:11.950429 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovn-controller" containerID="cri-o://201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05" gracePeriod=30 Dec 12 16:00:11 crc kubenswrapper[4693]: I1212 16:00:11.950527 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="nbdb" containerID="cri-o://1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a" gracePeriod=30 Dec 12 16:00:11 crc kubenswrapper[4693]: I1212 16:00:11.950571 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="kube-rbac-proxy-node" containerID="cri-o://9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92" gracePeriod=30 Dec 12 16:00:11 crc kubenswrapper[4693]: I1212 16:00:11.950609 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovn-acl-logging" containerID="cri-o://54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48" gracePeriod=30 Dec 12 16:00:11 crc kubenswrapper[4693]: I1212 16:00:11.950548 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="northd" containerID="cri-o://8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c" gracePeriod=30 Dec 12 16:00:11 crc kubenswrapper[4693]: I1212 16:00:11.950669 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="sbdb" containerID="cri-o://f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed" gracePeriod=30 Dec 12 16:00:11 crc kubenswrapper[4693]: I1212 16:00:11.950561 4693 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e" gracePeriod=30 Dec 12 16:00:11 crc kubenswrapper[4693]: I1212 16:00:11.989558 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovnkube-controller" containerID="cri-o://444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25" gracePeriod=30 Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.110468 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sllz5_e54028d7-cdbb-4fa9-92cd-9570edacb888/kube-multus/2.log" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.111525 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sllz5_e54028d7-cdbb-4fa9-92cd-9570edacb888/kube-multus/1.log" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.111577 4693 generic.go:334] "Generic (PLEG): container finished" podID="e54028d7-cdbb-4fa9-92cd-9570edacb888" containerID="20f65b2d3a7013a476343e6940f753f3203dcd391cc6f30cd35076234e281395" exitCode=2 Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.111664 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sllz5" event={"ID":"e54028d7-cdbb-4fa9-92cd-9570edacb888","Type":"ContainerDied","Data":"20f65b2d3a7013a476343e6940f753f3203dcd391cc6f30cd35076234e281395"} Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.111701 4693 scope.go:117] "RemoveContainer" containerID="3dcd0e248c19f95611ffa8d0a665c032dff039d82f9b088c437e486136574fce" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.112178 4693 scope.go:117] "RemoveContainer" containerID="20f65b2d3a7013a476343e6940f753f3203dcd391cc6f30cd35076234e281395" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.136912 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovnkube-controller/3.log" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.155748 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovn-acl-logging/0.log" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.156237 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovn-controller/0.log" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.157421 4693 generic.go:334] "Generic (PLEG): container finished" podID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerID="444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25" exitCode=0 Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.157442 4693 generic.go:334] "Generic (PLEG): container finished" podID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerID="1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a" exitCode=0 Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.157450 4693 generic.go:334] "Generic (PLEG): container finished" podID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerID="ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e" exitCode=0 Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.157457 4693 generic.go:334] "Generic (PLEG): container finished" 
podID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerID="54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48" exitCode=143 Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.157465 4693 generic.go:334] "Generic (PLEG): container finished" podID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerID="201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05" exitCode=143 Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.157484 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerDied","Data":"444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25"} Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.157510 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerDied","Data":"1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a"} Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.157519 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerDied","Data":"ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e"} Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.157528 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerDied","Data":"54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48"} Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.157536 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerDied","Data":"201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05"} Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.195421 4693 scope.go:117] "RemoveContainer" containerID="15049e5d253208466f13edd4c70b412f962d59285671ce1b0e0d86b8088e7147" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.306224 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovn-acl-logging/0.log" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.309964 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovn-controller/0.log" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.311418 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.456531 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.456581 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.485875 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c6f2k"] Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.498842 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovnkube-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.498928 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovnkube-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.499002 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="kube-rbac-proxy-node" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.499101 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="kube-rbac-proxy-node" Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.499171 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovnkube-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.499232 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovnkube-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.499316 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovn-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.499377 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovn-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.499443 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.499507 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.499582 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40def537-a8e1-4598-bcbd-c5b77b352fda" containerName="extract" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.499650 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="40def537-a8e1-4598-bcbd-c5b77b352fda" containerName="extract" Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.499723 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="kubecfg-setup" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.499816 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="kubecfg-setup" Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.499895 4693 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="nbdb" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.499968 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="nbdb" Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.500043 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="northd" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.500115 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="northd" Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.500190 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="sbdb" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.500258 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="sbdb" Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.500357 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovnkube-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.500423 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovnkube-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.500713 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40def537-a8e1-4598-bcbd-c5b77b352fda" containerName="util" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.500815 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="40def537-a8e1-4598-bcbd-c5b77b352fda" containerName="util" Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.500892 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovn-acl-logging" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.500961 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovn-acl-logging" Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.501035 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50cd7719-da3b-43a9-980b-60c1709e862e" containerName="collect-profiles" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.501108 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="50cd7719-da3b-43a9-980b-60c1709e862e" containerName="collect-profiles" Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.501176 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40def537-a8e1-4598-bcbd-c5b77b352fda" containerName="pull" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.501361 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="40def537-a8e1-4598-bcbd-c5b77b352fda" containerName="pull" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.501630 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovnkube-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.501708 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovn-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.501780 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="40def537-a8e1-4598-bcbd-c5b77b352fda" containerName="extract" 
Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.501874 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="nbdb" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.501940 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="northd" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.502004 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovnkube-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.502066 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="sbdb" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.502127 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovn-acl-logging" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.502186 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="kube-rbac-proxy-node" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.502242 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovnkube-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.502341 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="50cd7719-da3b-43a9-980b-60c1709e862e" containerName="collect-profiles" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.502413 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovnkube-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.502494 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.502703 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovnkube-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.502781 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovnkube-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: E1212 16:00:13.502858 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovnkube-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.502924 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovnkube-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.503119 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerName="ovnkube-controller" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.491347 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.491287 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-run-ovn-kubernetes\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506188 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-cni-netd\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506223 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-openvswitch\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506251 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-slash\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506296 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-etc-openvswitch\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506328 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwpht\" (UniqueName: \"kubernetes.io/projected/fa7eae7d-b662-434d-96c1-de3080d579bd-kube-api-access-dwpht\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506348 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-log-socket\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506379 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-kubelet\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506418 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-env-overrides\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506450 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-run-netns\") pod 
\"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506476 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506507 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-ovnkube-script-lib\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506552 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-var-lib-openvswitch\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506573 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-node-log\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506595 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-ovn\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506617 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-cni-bin\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506648 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa7eae7d-b662-434d-96c1-de3080d579bd-ovn-node-metrics-cert\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506698 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-systemd\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506721 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-ovnkube-config\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.506739 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-systemd-units\") pod \"fa7eae7d-b662-434d-96c1-de3080d579bd\" (UID: \"fa7eae7d-b662-434d-96c1-de3080d579bd\") " Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.507241 4693 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.507700 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.507733 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.507754 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-slash" (OuterVolumeSpecName: "host-slash") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.507775 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.508381 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-node-log" (OuterVolumeSpecName: "node-log") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.508771 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-log-socket" (OuterVolumeSpecName: "log-socket") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.508800 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.509146 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.509179 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.509205 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.509497 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.509527 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.509956 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.509994 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.510333 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.510677 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.514537 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa7eae7d-b662-434d-96c1-de3080d579bd-kube-api-access-dwpht" (OuterVolumeSpecName: "kube-api-access-dwpht") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "kube-api-access-dwpht". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.516868 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.518041 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa7eae7d-b662-434d-96c1-de3080d579bd-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.554264 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "fa7eae7d-b662-434d-96c1-de3080d579bd" (UID: "fa7eae7d-b662-434d-96c1-de3080d579bd"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609134 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-run-ovn-kubernetes\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609182 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-systemd-units\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609217 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-run-systemd\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609327 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-run-openvswitch\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609390 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-var-lib-openvswitch\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609486 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4smr\" (UniqueName: \"kubernetes.io/projected/b553b341-1cfb-4099-8317-9c9b382ea7dc-kube-api-access-w4smr\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609530 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-kubelet\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609587 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-cni-bin\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609615 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-cni-netd\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609638 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-etc-openvswitch\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609723 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b553b341-1cfb-4099-8317-9c9b382ea7dc-ovnkube-config\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609749 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b553b341-1cfb-4099-8317-9c9b382ea7dc-ovnkube-script-lib\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609788 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-log-socket\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609808 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609831 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-node-log\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609893 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-run-ovn\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609941 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b553b341-1cfb-4099-8317-9c9b382ea7dc-env-overrides\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.609990 4693 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b553b341-1cfb-4099-8317-9c9b382ea7dc-ovn-node-metrics-cert\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610011 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-run-netns\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610040 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-slash\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610146 4693 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610159 4693 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610170 4693 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-slash\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610178 4693 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610188 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwpht\" (UniqueName: \"kubernetes.io/projected/fa7eae7d-b662-434d-96c1-de3080d579bd-kube-api-access-dwpht\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610211 4693 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-log-socket\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610220 4693 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610228 4693 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610236 4693 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc 
kubenswrapper[4693]: I1212 16:00:13.610245 4693 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610253 4693 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610262 4693 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610286 4693 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-node-log\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610294 4693 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610301 4693 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610309 4693 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa7eae7d-b662-434d-96c1-de3080d579bd-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610317 4693 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610325 4693 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa7eae7d-b662-434d-96c1-de3080d579bd-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.610333 4693 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fa7eae7d-b662-434d-96c1-de3080d579bd-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711050 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-run-ovn-kubernetes\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711097 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-systemd-units\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711132 4693 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-run-systemd\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711150 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-run-openvswitch\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711172 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-var-lib-openvswitch\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711191 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4smr\" (UniqueName: \"kubernetes.io/projected/b553b341-1cfb-4099-8317-9c9b382ea7dc-kube-api-access-w4smr\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711206 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-kubelet\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711228 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-cni-bin\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711243 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-cni-netd\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711263 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-etc-openvswitch\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711307 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b553b341-1cfb-4099-8317-9c9b382ea7dc-ovnkube-config\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711322 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b553b341-1cfb-4099-8317-9c9b382ea7dc-ovnkube-script-lib\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711341 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-log-socket\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711358 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711376 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-node-log\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711397 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-run-ovn\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711413 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b553b341-1cfb-4099-8317-9c9b382ea7dc-env-overrides\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711427 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b553b341-1cfb-4099-8317-9c9b382ea7dc-ovn-node-metrics-cert\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711444 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-run-netns\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711468 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-slash\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711565 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-slash\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711607 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-run-ovn-kubernetes\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711632 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-systemd-units\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711656 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-run-systemd\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711681 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-run-openvswitch\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711703 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-var-lib-openvswitch\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711962 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-kubelet\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.711988 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-cni-bin\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.712007 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-cni-netd\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.712028 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-etc-openvswitch\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.712625 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b553b341-1cfb-4099-8317-9c9b382ea7dc-ovnkube-config\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.713016 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b553b341-1cfb-4099-8317-9c9b382ea7dc-ovnkube-script-lib\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.713054 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-log-socket\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.713077 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.713097 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-node-log\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.713116 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-run-ovn\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.713415 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b553b341-1cfb-4099-8317-9c9b382ea7dc-env-overrides\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.713457 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b553b341-1cfb-4099-8317-9c9b382ea7dc-host-run-netns\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.718181 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b553b341-1cfb-4099-8317-9c9b382ea7dc-ovn-node-metrics-cert\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.743366 4693 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4smr\" (UniqueName: \"kubernetes.io/projected/b553b341-1cfb-4099-8317-9c9b382ea7dc-kube-api-access-w4smr\") pod \"ovnkube-node-c6f2k\" (UID: \"b553b341-1cfb-4099-8317-9c9b382ea7dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:13 crc kubenswrapper[4693]: I1212 16:00:13.860547 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.185712 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sllz5_e54028d7-cdbb-4fa9-92cd-9570edacb888/kube-multus/2.log" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.186058 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sllz5" event={"ID":"e54028d7-cdbb-4fa9-92cd-9570edacb888","Type":"ContainerStarted","Data":"d832e02230a36d0a98248c68d75716391f268a1ac987d377fb1c2532e58c9bbf"} Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.187702 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" event={"ID":"b553b341-1cfb-4099-8317-9c9b382ea7dc","Type":"ContainerStarted","Data":"79cbc344a24af113f6cf3cc568d6bd4876e17e0bba7e92cfa6ca2c25eafa3ee5"} Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.187723 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" event={"ID":"b553b341-1cfb-4099-8317-9c9b382ea7dc","Type":"ContainerStarted","Data":"92d13dbed586027db956ca9ab3d68b993a650242c3a63a4299751d8554dd8580"} Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.191085 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovn-acl-logging/0.log" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.191430 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ps9gt_fa7eae7d-b662-434d-96c1-de3080d579bd/ovn-controller/0.log" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.192509 4693 generic.go:334] "Generic (PLEG): container finished" podID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerID="f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed" exitCode=0 Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.192531 4693 generic.go:334] "Generic (PLEG): container finished" podID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerID="8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c" exitCode=0 Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.192539 4693 generic.go:334] "Generic (PLEG): container finished" podID="fa7eae7d-b662-434d-96c1-de3080d579bd" containerID="9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92" exitCode=0 Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.192564 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerDied","Data":"f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed"} Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.192584 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerDied","Data":"8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c"} Dec 12 16:00:14 crc 
kubenswrapper[4693]: I1212 16:00:14.192594 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerDied","Data":"9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92"} Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.192602 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" event={"ID":"fa7eae7d-b662-434d-96c1-de3080d579bd","Type":"ContainerDied","Data":"11f8bc93ddecaef42867a9f1162942a85913bddc365c797db22423b8bdf5e5aa"} Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.192618 4693 scope.go:117] "RemoveContainer" containerID="444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.192679 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ps9gt" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.293152 4693 scope.go:117] "RemoveContainer" containerID="f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.311506 4693 scope.go:117] "RemoveContainer" containerID="1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.357506 4693 scope.go:117] "RemoveContainer" containerID="8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.382431 4693 scope.go:117] "RemoveContainer" containerID="ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.408138 4693 scope.go:117] "RemoveContainer" containerID="9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.417742 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ps9gt"] Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.425709 4693 scope.go:117] "RemoveContainer" containerID="54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.438387 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ps9gt"] Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.467473 4693 scope.go:117] "RemoveContainer" containerID="201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.489388 4693 scope.go:117] "RemoveContainer" containerID="4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.551981 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xs4tf" podUID="95322c55-e1a1-41d4-b362-bfa4200629f5" containerName="registry-server" probeResult="failure" output=< Dec 12 16:00:14 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 16:00:14 crc kubenswrapper[4693]: > Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.554866 4693 scope.go:117] "RemoveContainer" containerID="444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25" Dec 12 16:00:14 crc kubenswrapper[4693]: E1212 16:00:14.555340 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25\": container with ID starting with 444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25 not found: ID does not exist" containerID="444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.555387 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25"} err="failed to get container status \"444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25\": rpc error: code = NotFound desc = could not find container \"444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25\": container with ID starting with 444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25 not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.555427 4693 scope.go:117] "RemoveContainer" containerID="f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed" Dec 12 16:00:14 crc kubenswrapper[4693]: E1212 16:00:14.555809 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\": container with ID starting with f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed not found: ID does not exist" containerID="f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.555859 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed"} err="failed to get container status \"f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\": rpc error: code = NotFound desc = could not find container \"f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\": container with ID starting with f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.555884 4693 scope.go:117] "RemoveContainer" containerID="1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a" Dec 12 16:00:14 crc kubenswrapper[4693]: E1212 16:00:14.556516 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\": container with ID starting with 1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a not found: ID does not exist" containerID="1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.556548 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a"} err="failed to get container status \"1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\": rpc error: code = NotFound desc = could not find container \"1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\": container with ID starting with 1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.556574 4693 scope.go:117] "RemoveContainer" containerID="8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c" Dec 12 16:00:14 crc 
kubenswrapper[4693]: E1212 16:00:14.557315 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\": container with ID starting with 8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c not found: ID does not exist" containerID="8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.557372 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c"} err="failed to get container status \"8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\": rpc error: code = NotFound desc = could not find container \"8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\": container with ID starting with 8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.557403 4693 scope.go:117] "RemoveContainer" containerID="ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e" Dec 12 16:00:14 crc kubenswrapper[4693]: E1212 16:00:14.557757 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\": container with ID starting with ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e not found: ID does not exist" containerID="ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.557785 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e"} err="failed to get container status \"ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\": rpc error: code = NotFound desc = could not find container \"ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\": container with ID starting with ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.557802 4693 scope.go:117] "RemoveContainer" containerID="9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92" Dec 12 16:00:14 crc kubenswrapper[4693]: E1212 16:00:14.558083 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\": container with ID starting with 9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92 not found: ID does not exist" containerID="9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.558117 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92"} err="failed to get container status \"9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\": rpc error: code = NotFound desc = could not find container \"9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\": container with ID starting with 9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92 not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: 
I1212 16:00:14.558138 4693 scope.go:117] "RemoveContainer" containerID="54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48" Dec 12 16:00:14 crc kubenswrapper[4693]: E1212 16:00:14.558404 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\": container with ID starting with 54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48 not found: ID does not exist" containerID="54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.558430 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48"} err="failed to get container status \"54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\": rpc error: code = NotFound desc = could not find container \"54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\": container with ID starting with 54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48 not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.558445 4693 scope.go:117] "RemoveContainer" containerID="201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05" Dec 12 16:00:14 crc kubenswrapper[4693]: E1212 16:00:14.558708 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\": container with ID starting with 201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05 not found: ID does not exist" containerID="201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.558740 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05"} err="failed to get container status \"201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\": rpc error: code = NotFound desc = could not find container \"201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\": container with ID starting with 201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05 not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.558782 4693 scope.go:117] "RemoveContainer" containerID="4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b" Dec 12 16:00:14 crc kubenswrapper[4693]: E1212 16:00:14.559018 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\": container with ID starting with 4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b not found: ID does not exist" containerID="4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.559041 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b"} err="failed to get container status \"4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\": rpc error: code = NotFound desc = could not find container \"4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\": container 
with ID starting with 4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.559055 4693 scope.go:117] "RemoveContainer" containerID="444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.559290 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25"} err="failed to get container status \"444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25\": rpc error: code = NotFound desc = could not find container \"444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25\": container with ID starting with 444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25 not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.559320 4693 scope.go:117] "RemoveContainer" containerID="f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.559564 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed"} err="failed to get container status \"f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\": rpc error: code = NotFound desc = could not find container \"f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\": container with ID starting with f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.559586 4693 scope.go:117] "RemoveContainer" containerID="1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.559788 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a"} err="failed to get container status \"1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\": rpc error: code = NotFound desc = could not find container \"1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\": container with ID starting with 1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.559811 4693 scope.go:117] "RemoveContainer" containerID="8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.560041 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c"} err="failed to get container status \"8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\": rpc error: code = NotFound desc = could not find container \"8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\": container with ID starting with 8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.560061 4693 scope.go:117] "RemoveContainer" containerID="ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.560245 4693 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e"} err="failed to get container status \"ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\": rpc error: code = NotFound desc = could not find container \"ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\": container with ID starting with ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.560264 4693 scope.go:117] "RemoveContainer" containerID="9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.560496 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92"} err="failed to get container status \"9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\": rpc error: code = NotFound desc = could not find container \"9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\": container with ID starting with 9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92 not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.560545 4693 scope.go:117] "RemoveContainer" containerID="54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.560833 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48"} err="failed to get container status \"54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\": rpc error: code = NotFound desc = could not find container \"54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\": container with ID starting with 54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48 not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.560871 4693 scope.go:117] "RemoveContainer" containerID="201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.561104 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05"} err="failed to get container status \"201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\": rpc error: code = NotFound desc = could not find container \"201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\": container with ID starting with 201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05 not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.561124 4693 scope.go:117] "RemoveContainer" containerID="4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.561373 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b"} err="failed to get container status \"4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\": rpc error: code = NotFound desc = could not find container \"4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\": container with ID starting with 4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b not found: ID does not exist" Dec 
12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.561391 4693 scope.go:117] "RemoveContainer" containerID="444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.561581 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25"} err="failed to get container status \"444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25\": rpc error: code = NotFound desc = could not find container \"444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25\": container with ID starting with 444adf9a1fb3a3f9937cbb0372fec3e997a0b018e303d94e917a4bdebe49fd25 not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.561601 4693 scope.go:117] "RemoveContainer" containerID="f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.562192 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed"} err="failed to get container status \"f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\": rpc error: code = NotFound desc = could not find container \"f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed\": container with ID starting with f884937b6ecb88cd34f438780ea843dfcde47b7d93a524653f6692d8d95821ed not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.562213 4693 scope.go:117] "RemoveContainer" containerID="1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.563228 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a"} err="failed to get container status \"1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\": rpc error: code = NotFound desc = could not find container \"1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a\": container with ID starting with 1440bb8ad4c06e1177868f69d8e715d8a1e74345fc47f1b317abe1499e51d80a not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.563263 4693 scope.go:117] "RemoveContainer" containerID="8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.563492 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c"} err="failed to get container status \"8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\": rpc error: code = NotFound desc = could not find container \"8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c\": container with ID starting with 8ae519785e9d35261ec2d558e924cbd856508f101578f52e32c8675417f3f63c not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.563515 4693 scope.go:117] "RemoveContainer" containerID="ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.563876 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e"} err="failed to get container status 
\"ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\": rpc error: code = NotFound desc = could not find container \"ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e\": container with ID starting with ca77fd0a98d104bc08271999e7de6bbdf82b43390f36ef278eae8fe76696423e not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.563900 4693 scope.go:117] "RemoveContainer" containerID="9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.564198 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92"} err="failed to get container status \"9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\": rpc error: code = NotFound desc = could not find container \"9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92\": container with ID starting with 9d507db6066537ec1318f4d6fdb424b06beab64fd7ee1aeff6d046408ef13c92 not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.564219 4693 scope.go:117] "RemoveContainer" containerID="54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.564505 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48"} err="failed to get container status \"54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\": rpc error: code = NotFound desc = could not find container \"54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48\": container with ID starting with 54ade04e81b1f5f7414dc97801df80e787023dcf331d781b26edbf33e106fd48 not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.564527 4693 scope.go:117] "RemoveContainer" containerID="201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.564767 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05"} err="failed to get container status \"201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\": rpc error: code = NotFound desc = could not find container \"201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05\": container with ID starting with 201d942d0eb5502227267e39b68d47360821185e7482ef8611146dff4805cf05 not found: ID does not exist" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.564804 4693 scope.go:117] "RemoveContainer" containerID="4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b" Dec 12 16:00:14 crc kubenswrapper[4693]: I1212 16:00:14.565010 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b"} err="failed to get container status \"4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\": rpc error: code = NotFound desc = could not find container \"4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b\": container with ID starting with 4e3bb9823e133a98eaca2538af8829de7465625d08ecd55576ce2b8a90be171b not found: ID does not exist" Dec 12 16:00:15 crc kubenswrapper[4693]: I1212 16:00:15.198373 4693 generic.go:334] "Generic (PLEG): container finished" 
podID="b553b341-1cfb-4099-8317-9c9b382ea7dc" containerID="79cbc344a24af113f6cf3cc568d6bd4876e17e0bba7e92cfa6ca2c25eafa3ee5" exitCode=0 Dec 12 16:00:15 crc kubenswrapper[4693]: I1212 16:00:15.198466 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" event={"ID":"b553b341-1cfb-4099-8317-9c9b382ea7dc","Type":"ContainerDied","Data":"79cbc344a24af113f6cf3cc568d6bd4876e17e0bba7e92cfa6ca2c25eafa3ee5"} Dec 12 16:00:15 crc kubenswrapper[4693]: I1212 16:00:15.364025 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa7eae7d-b662-434d-96c1-de3080d579bd" path="/var/lib/kubelet/pods/fa7eae7d-b662-434d-96c1-de3080d579bd/volumes" Dec 12 16:00:16 crc kubenswrapper[4693]: I1212 16:00:16.232413 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" event={"ID":"b553b341-1cfb-4099-8317-9c9b382ea7dc","Type":"ContainerStarted","Data":"93dcf40dab65f40a1a3cdc16eded7288796ecb22b72dc82030b44c40c41991b2"} Dec 12 16:00:16 crc kubenswrapper[4693]: I1212 16:00:16.233210 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" event={"ID":"b553b341-1cfb-4099-8317-9c9b382ea7dc","Type":"ContainerStarted","Data":"7254e3f04b3ef27e3a1fbf49847867b724af5c44de827986873c50662aa5f660"} Dec 12 16:00:16 crc kubenswrapper[4693]: I1212 16:00:16.233302 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" event={"ID":"b553b341-1cfb-4099-8317-9c9b382ea7dc","Type":"ContainerStarted","Data":"db8fa63cfb3aa56ad2947e982be769adfb082fef6f99f00fbaad6077629f8982"} Dec 12 16:00:17 crc kubenswrapper[4693]: I1212 16:00:17.241577 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" event={"ID":"b553b341-1cfb-4099-8317-9c9b382ea7dc","Type":"ContainerStarted","Data":"6aaaa26ad55c1329f0c2b17cd4d28b6c760f2e2d69f5955e85491f9847319068"} Dec 12 16:00:17 crc kubenswrapper[4693]: I1212 16:00:17.241907 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" event={"ID":"b553b341-1cfb-4099-8317-9c9b382ea7dc","Type":"ContainerStarted","Data":"783ba42e6f866d0e64d48b9d755509a7a11c7ec63097cb81053b12bf77c4cce2"} Dec 12 16:00:17 crc kubenswrapper[4693]: I1212 16:00:17.241923 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" event={"ID":"b553b341-1cfb-4099-8317-9c9b382ea7dc","Type":"ContainerStarted","Data":"d22bde205a35dfddfb8e3a1239d19f7f4907d9d93dd30aa06a6bee05f9b0b5fe"} Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.249165 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7"] Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.250193 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.252028 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.252394 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.255719 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4g2lx" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.382500 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk"] Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.383424 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.385607 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-7nzmc" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.385864 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.394427 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkzs7\" (UniqueName: \"kubernetes.io/projected/f10d4e25-5e26-47fa-a4b4-443186694122-kube-api-access-mkzs7\") pod \"obo-prometheus-operator-668cf9dfbb-xbtg7\" (UID: \"f10d4e25-5e26-47fa-a4b4-443186694122\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.396331 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx"] Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.397236 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.495403 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f0ae906-0fb5-4478-808d-e6b1f5785736-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk\" (UID: \"9f0ae906-0fb5-4478-808d-e6b1f5785736\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.495545 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkzs7\" (UniqueName: \"kubernetes.io/projected/f10d4e25-5e26-47fa-a4b4-443186694122-kube-api-access-mkzs7\") pod \"obo-prometheus-operator-668cf9dfbb-xbtg7\" (UID: \"f10d4e25-5e26-47fa-a4b4-443186694122\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.495629 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9f0ae906-0fb5-4478-808d-e6b1f5785736-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk\" (UID: \"9f0ae906-0fb5-4478-808d-e6b1f5785736\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.520000 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkzs7\" (UniqueName: \"kubernetes.io/projected/f10d4e25-5e26-47fa-a4b4-443186694122-kube-api-access-mkzs7\") pod \"obo-prometheus-operator-668cf9dfbb-xbtg7\" (UID: \"f10d4e25-5e26-47fa-a4b4-443186694122\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.569519 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-t9zpr"] Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.570570 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.570621 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.572434 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-q4jxj" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.572583 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.596519 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9f0ae906-0fb5-4478-808d-e6b1f5785736-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk\" (UID: \"9f0ae906-0fb5-4478-808d-e6b1f5785736\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.597081 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3581c9cb-4a5e-436c-969e-1a7311dbefc3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx\" (UID: \"3581c9cb-4a5e-436c-969e-1a7311dbefc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.597119 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f0ae906-0fb5-4478-808d-e6b1f5785736-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk\" (UID: \"9f0ae906-0fb5-4478-808d-e6b1f5785736\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.597174 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3581c9cb-4a5e-436c-969e-1a7311dbefc3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx\" (UID: \"3581c9cb-4a5e-436c-969e-1a7311dbefc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.601050 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f0ae906-0fb5-4478-808d-e6b1f5785736-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk\" (UID: \"9f0ae906-0fb5-4478-808d-e6b1f5785736\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.602779 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9f0ae906-0fb5-4478-808d-e6b1f5785736-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk\" (UID: \"9f0ae906-0fb5-4478-808d-e6b1f5785736\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" Dec 12 16:00:18 crc kubenswrapper[4693]: E1212 16:00:18.604730 4693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_obo-prometheus-operator-668cf9dfbb-xbtg7_openshift-operators_f10d4e25-5e26-47fa-a4b4-443186694122_0(5a7f8156d7df0ab9fa5ef9a2f6a9b49aab83b9969d119d7dcfa16157b29aba71): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 12 16:00:18 crc kubenswrapper[4693]: E1212 16:00:18.604967 4693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xbtg7_openshift-operators_f10d4e25-5e26-47fa-a4b4-443186694122_0(5a7f8156d7df0ab9fa5ef9a2f6a9b49aab83b9969d119d7dcfa16157b29aba71): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" Dec 12 16:00:18 crc kubenswrapper[4693]: E1212 16:00:18.605059 4693 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xbtg7_openshift-operators_f10d4e25-5e26-47fa-a4b4-443186694122_0(5a7f8156d7df0ab9fa5ef9a2f6a9b49aab83b9969d119d7dcfa16157b29aba71): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" Dec 12 16:00:18 crc kubenswrapper[4693]: E1212 16:00:18.605211 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-xbtg7_openshift-operators(f10d4e25-5e26-47fa-a4b4-443186694122)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-xbtg7_openshift-operators(f10d4e25-5e26-47fa-a4b4-443186694122)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xbtg7_openshift-operators_f10d4e25-5e26-47fa-a4b4-443186694122_0(5a7f8156d7df0ab9fa5ef9a2f6a9b49aab83b9969d119d7dcfa16157b29aba71): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" podUID="f10d4e25-5e26-47fa-a4b4-443186694122" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.699213 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/51db3b1a-8b64-47d6-b09c-a8356e855606-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-t9zpr\" (UID: \"51db3b1a-8b64-47d6-b09c-a8356e855606\") " pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.699654 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3581c9cb-4a5e-436c-969e-1a7311dbefc3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx\" (UID: \"3581c9cb-4a5e-436c-969e-1a7311dbefc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.699780 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27dhc\" (UniqueName: \"kubernetes.io/projected/51db3b1a-8b64-47d6-b09c-a8356e855606-kube-api-access-27dhc\") pod \"observability-operator-d8bb48f5d-t9zpr\" (UID: \"51db3b1a-8b64-47d6-b09c-a8356e855606\") " pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.699931 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3581c9cb-4a5e-436c-969e-1a7311dbefc3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx\" (UID: \"3581c9cb-4a5e-436c-969e-1a7311dbefc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.703189 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.703892 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3581c9cb-4a5e-436c-969e-1a7311dbefc3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx\" (UID: \"3581c9cb-4a5e-436c-969e-1a7311dbefc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.713852 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3581c9cb-4a5e-436c-969e-1a7311dbefc3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx\" (UID: \"3581c9cb-4a5e-436c-969e-1a7311dbefc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.720606 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.727784 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-clst5"] Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.728802 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.732017 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-hl8bz" Dec 12 16:00:18 crc kubenswrapper[4693]: E1212 16:00:18.771190 4693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk_openshift-operators_9f0ae906-0fb5-4478-808d-e6b1f5785736_0(bbfdb1c3618257bc4f400916ff6cd116dc888e3bfd5a3125b5d92b9814085227): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 12 16:00:18 crc kubenswrapper[4693]: E1212 16:00:18.771261 4693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk_openshift-operators_9f0ae906-0fb5-4478-808d-e6b1f5785736_0(bbfdb1c3618257bc4f400916ff6cd116dc888e3bfd5a3125b5d92b9814085227): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" Dec 12 16:00:18 crc kubenswrapper[4693]: E1212 16:00:18.771295 4693 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk_openshift-operators_9f0ae906-0fb5-4478-808d-e6b1f5785736_0(bbfdb1c3618257bc4f400916ff6cd116dc888e3bfd5a3125b5d92b9814085227): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" Dec 12 16:00:18 crc kubenswrapper[4693]: E1212 16:00:18.771346 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk_openshift-operators(9f0ae906-0fb5-4478-808d-e6b1f5785736)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk_openshift-operators(9f0ae906-0fb5-4478-808d-e6b1f5785736)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk_openshift-operators_9f0ae906-0fb5-4478-808d-e6b1f5785736_0(bbfdb1c3618257bc4f400916ff6cd116dc888e3bfd5a3125b5d92b9814085227): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" podUID="9f0ae906-0fb5-4478-808d-e6b1f5785736" Dec 12 16:00:18 crc kubenswrapper[4693]: E1212 16:00:18.788541 4693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx_openshift-operators_3581c9cb-4a5e-436c-969e-1a7311dbefc3_0(984e3055dd2bfb96ea4299a886cbb42a1ad8ed6f94801c285f89b1b9e3deab45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 12 16:00:18 crc kubenswrapper[4693]: E1212 16:00:18.788622 4693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx_openshift-operators_3581c9cb-4a5e-436c-969e-1a7311dbefc3_0(984e3055dd2bfb96ea4299a886cbb42a1ad8ed6f94801c285f89b1b9e3deab45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" Dec 12 16:00:18 crc kubenswrapper[4693]: E1212 16:00:18.788651 4693 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx_openshift-operators_3581c9cb-4a5e-436c-969e-1a7311dbefc3_0(984e3055dd2bfb96ea4299a886cbb42a1ad8ed6f94801c285f89b1b9e3deab45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" Dec 12 16:00:18 crc kubenswrapper[4693]: E1212 16:00:18.788703 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx_openshift-operators(3581c9cb-4a5e-436c-969e-1a7311dbefc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx_openshift-operators(3581c9cb-4a5e-436c-969e-1a7311dbefc3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx_openshift-operators_3581c9cb-4a5e-436c-969e-1a7311dbefc3_0(984e3055dd2bfb96ea4299a886cbb42a1ad8ed6f94801c285f89b1b9e3deab45): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" podUID="3581c9cb-4a5e-436c-969e-1a7311dbefc3" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.801518 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/51db3b1a-8b64-47d6-b09c-a8356e855606-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-t9zpr\" (UID: \"51db3b1a-8b64-47d6-b09c-a8356e855606\") " pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.801570 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27dhc\" (UniqueName: \"kubernetes.io/projected/51db3b1a-8b64-47d6-b09c-a8356e855606-kube-api-access-27dhc\") pod \"observability-operator-d8bb48f5d-t9zpr\" (UID: \"51db3b1a-8b64-47d6-b09c-a8356e855606\") " pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.814040 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/51db3b1a-8b64-47d6-b09c-a8356e855606-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-t9zpr\" (UID: \"51db3b1a-8b64-47d6-b09c-a8356e855606\") " pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.830230 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27dhc\" (UniqueName: \"kubernetes.io/projected/51db3b1a-8b64-47d6-b09c-a8356e855606-kube-api-access-27dhc\") pod \"observability-operator-d8bb48f5d-t9zpr\" (UID: \"51db3b1a-8b64-47d6-b09c-a8356e855606\") " pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.902804 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2dd59d35-d975-49a2-8a23-db068e921965-openshift-service-ca\") pod \"perses-operator-5446b9c989-clst5\" (UID: \"2dd59d35-d975-49a2-8a23-db068e921965\") " pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.902894 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqqg8\" (UniqueName: \"kubernetes.io/projected/2dd59d35-d975-49a2-8a23-db068e921965-kube-api-access-vqqg8\") pod \"perses-operator-5446b9c989-clst5\" (UID: \"2dd59d35-d975-49a2-8a23-db068e921965\") " pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:18 crc kubenswrapper[4693]: I1212 16:00:18.939386 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:18 crc kubenswrapper[4693]: E1212 16:00:18.964636 4693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-t9zpr_openshift-operators_51db3b1a-8b64-47d6-b09c-a8356e855606_0(c27aefe692c3a6a3f1a7fab652100e54afce8e77239b4fe629a84dccb1560a26): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
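[Editor's note] Every CreatePodSandbox failure above bottoms out in the same condition: /etc/kubernetes/cni/net.d/ contains no CNI configuration yet, because the network provider (the ovnkube-node pod, which this log shows becoming ready only at 16:00:23) has not written its config file. The Go sketch below is a minimal illustration of that readiness condition, under the assumption that checking for files with CNI config extensions is enough; the real runtime path (libcni inside CRI-O) also parses and validates each candidate file, so treat this as an approximation, not the actual implementation.

// cnicheck.go: a minimal sketch of the readiness condition behind the
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" errors above.
// Illustrative only: the real runtime also validates each candidate file,
// not just its extension.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one file with a
// recognized CNI config extension (.conf, .conflist, .json).
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	const confDir = "/etc/kubernetes/cni/net.d" // directory named in the log
	ok, err := hasCNIConfig(confDir)
	if err != nil || !ok {
		// Mirrors the question the runtime asks in the log entries above.
		fmt.Println("no CNI configuration file in", confDir+". Has your network provider started?")
		os.Exit(1)
	}
	fmt.Println("CNI configuration present; sandbox creation can proceed")
}

Consistent with this, the retries below stop failing once ovnkube-node is up: the same pods that error at 16:00:18 and 16:00:25 get sandboxes at 16:00:36-16:00:40.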
Dec 12 16:00:18 crc kubenswrapper[4693]: E1212 16:00:18.964725 4693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-t9zpr_openshift-operators_51db3b1a-8b64-47d6-b09c-a8356e855606_0(c27aefe692c3a6a3f1a7fab652100e54afce8e77239b4fe629a84dccb1560a26): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:18 crc kubenswrapper[4693]: E1212 16:00:18.964749 4693 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-t9zpr_openshift-operators_51db3b1a-8b64-47d6-b09c-a8356e855606_0(c27aefe692c3a6a3f1a7fab652100e54afce8e77239b4fe629a84dccb1560a26): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:18 crc kubenswrapper[4693]: E1212 16:00:18.964796 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-t9zpr_openshift-operators(51db3b1a-8b64-47d6-b09c-a8356e855606)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-t9zpr_openshift-operators(51db3b1a-8b64-47d6-b09c-a8356e855606)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-t9zpr_openshift-operators_51db3b1a-8b64-47d6-b09c-a8356e855606_0(c27aefe692c3a6a3f1a7fab652100e54afce8e77239b4fe629a84dccb1560a26): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" podUID="51db3b1a-8b64-47d6-b09c-a8356e855606" Dec 12 16:00:19 crc kubenswrapper[4693]: I1212 16:00:19.004762 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqqg8\" (UniqueName: \"kubernetes.io/projected/2dd59d35-d975-49a2-8a23-db068e921965-kube-api-access-vqqg8\") pod \"perses-operator-5446b9c989-clst5\" (UID: \"2dd59d35-d975-49a2-8a23-db068e921965\") " pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:19 crc kubenswrapper[4693]: I1212 16:00:19.004869 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2dd59d35-d975-49a2-8a23-db068e921965-openshift-service-ca\") pod \"perses-operator-5446b9c989-clst5\" (UID: \"2dd59d35-d975-49a2-8a23-db068e921965\") " pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:19 crc kubenswrapper[4693]: I1212 16:00:19.005941 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2dd59d35-d975-49a2-8a23-db068e921965-openshift-service-ca\") pod \"perses-operator-5446b9c989-clst5\" (UID: \"2dd59d35-d975-49a2-8a23-db068e921965\") " pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:19 crc kubenswrapper[4693]: I1212 16:00:19.024480 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqqg8\" (UniqueName: \"kubernetes.io/projected/2dd59d35-d975-49a2-8a23-db068e921965-kube-api-access-vqqg8\") pod \"perses-operator-5446b9c989-clst5\" (UID: \"2dd59d35-d975-49a2-8a23-db068e921965\") " pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:19 crc kubenswrapper[4693]: I1212 16:00:19.101629 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:19 crc kubenswrapper[4693]: E1212 16:00:19.124258 4693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-clst5_openshift-operators_2dd59d35-d975-49a2-8a23-db068e921965_0(e725beb7987e316c8e3cc614eb7186fbcaae87a951235eb98e342bfa569b273e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 12 16:00:19 crc kubenswrapper[4693]: E1212 16:00:19.124347 4693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-clst5_openshift-operators_2dd59d35-d975-49a2-8a23-db068e921965_0(e725beb7987e316c8e3cc614eb7186fbcaae87a951235eb98e342bfa569b273e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:19 crc kubenswrapper[4693]: E1212 16:00:19.124368 4693 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-clst5_openshift-operators_2dd59d35-d975-49a2-8a23-db068e921965_0(e725beb7987e316c8e3cc614eb7186fbcaae87a951235eb98e342bfa569b273e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:19 crc kubenswrapper[4693]: E1212 16:00:19.124416 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-clst5_openshift-operators(2dd59d35-d975-49a2-8a23-db068e921965)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-clst5_openshift-operators(2dd59d35-d975-49a2-8a23-db068e921965)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-clst5_openshift-operators_2dd59d35-d975-49a2-8a23-db068e921965_0(e725beb7987e316c8e3cc614eb7186fbcaae87a951235eb98e342bfa569b273e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-clst5" podUID="2dd59d35-d975-49a2-8a23-db068e921965" Dec 12 16:00:21 crc kubenswrapper[4693]: I1212 16:00:21.272059 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" event={"ID":"b553b341-1cfb-4099-8317-9c9b382ea7dc","Type":"ContainerStarted","Data":"7a895fc2d3e63454d42e53cc9c05626a13776752ada6a8d7d95a0d6857b92654"} Dec 12 16:00:23 crc kubenswrapper[4693]: I1212 16:00:23.287593 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" event={"ID":"b553b341-1cfb-4099-8317-9c9b382ea7dc","Type":"ContainerStarted","Data":"2c6b1a767894f3f81bcd33f577fe656d65418174c6f316d3434c7f917bb95c6a"} Dec 12 16:00:23 crc kubenswrapper[4693]: I1212 16:00:23.288169 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:23 crc kubenswrapper[4693]: I1212 16:00:23.288254 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:23 crc kubenswrapper[4693]: I1212 16:00:23.288355 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:23 crc kubenswrapper[4693]: I1212 16:00:23.322079 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:23 crc kubenswrapper[4693]: I1212 16:00:23.332055 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" podStartSLOduration=10.33203442 podStartE2EDuration="10.33203442s" podCreationTimestamp="2025-12-12 16:00:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:00:23.329131091 +0000 UTC m=+850.497770722" watchObservedRunningTime="2025-12-12 16:00:23.33203442 +0000 UTC m=+850.500674021" Dec 12 16:00:23 crc kubenswrapper[4693]: I1212 16:00:23.380952 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:23 crc kubenswrapper[4693]: I1212 16:00:23.551870 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:23 crc kubenswrapper[4693]: I1212 16:00:23.604576 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:23 crc kubenswrapper[4693]: I1212 16:00:23.794426 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-xs4tf"] Dec 12 16:00:24 crc kubenswrapper[4693]: I1212 16:00:24.966949 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7"] Dec 12 16:00:24 crc kubenswrapper[4693]: I1212 16:00:24.968092 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" Dec 12 16:00:24 crc kubenswrapper[4693]: I1212 16:00:24.968926 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" Dec 12 16:00:24 crc kubenswrapper[4693]: I1212 16:00:24.970986 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk"] Dec 12 16:00:24 crc kubenswrapper[4693]: I1212 16:00:24.971136 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" Dec 12 16:00:24 crc kubenswrapper[4693]: I1212 16:00:24.971678 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" Dec 12 16:00:24 crc kubenswrapper[4693]: I1212 16:00:24.975910 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-t9zpr"] Dec 12 16:00:24 crc kubenswrapper[4693]: I1212 16:00:24.976025 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:24 crc kubenswrapper[4693]: I1212 16:00:24.993880 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:25 crc kubenswrapper[4693]: I1212 16:00:25.004111 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx"] Dec 12 16:00:25 crc kubenswrapper[4693]: I1212 16:00:25.004314 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" Dec 12 16:00:25 crc kubenswrapper[4693]: I1212 16:00:25.004869 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.028436 4693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xbtg7_openshift-operators_f10d4e25-5e26-47fa-a4b4-443186694122_0(965e790cab55081ae4df001f23dbdb1b5232c4886538669eb95a12573813405c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.028502 4693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xbtg7_openshift-operators_f10d4e25-5e26-47fa-a4b4-443186694122_0(965e790cab55081ae4df001f23dbdb1b5232c4886538669eb95a12573813405c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.028523 4693 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xbtg7_openshift-operators_f10d4e25-5e26-47fa-a4b4-443186694122_0(965e790cab55081ae4df001f23dbdb1b5232c4886538669eb95a12573813405c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.028573 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-xbtg7_openshift-operators(f10d4e25-5e26-47fa-a4b4-443186694122)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-xbtg7_openshift-operators(f10d4e25-5e26-47fa-a4b4-443186694122)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xbtg7_openshift-operators_f10d4e25-5e26-47fa-a4b4-443186694122_0(965e790cab55081ae4df001f23dbdb1b5232c4886538669eb95a12573813405c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" podUID="f10d4e25-5e26-47fa-a4b4-443186694122" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.059687 4693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk_openshift-operators_9f0ae906-0fb5-4478-808d-e6b1f5785736_0(429a6022beca4c7f1deccabbc05a17da397f0095795d8ff2b688a60b189f8dce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.059771 4693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk_openshift-operators_9f0ae906-0fb5-4478-808d-e6b1f5785736_0(429a6022beca4c7f1deccabbc05a17da397f0095795d8ff2b688a60b189f8dce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.059802 4693 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk_openshift-operators_9f0ae906-0fb5-4478-808d-e6b1f5785736_0(429a6022beca4c7f1deccabbc05a17da397f0095795d8ff2b688a60b189f8dce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.059855 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk_openshift-operators(9f0ae906-0fb5-4478-808d-e6b1f5785736)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk_openshift-operators(9f0ae906-0fb5-4478-808d-e6b1f5785736)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk_openshift-operators_9f0ae906-0fb5-4478-808d-e6b1f5785736_0(429a6022beca4c7f1deccabbc05a17da397f0095795d8ff2b688a60b189f8dce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" podUID="9f0ae906-0fb5-4478-808d-e6b1f5785736" Dec 12 16:00:25 crc kubenswrapper[4693]: I1212 16:00:25.060694 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-clst5"] Dec 12 16:00:25 crc kubenswrapper[4693]: I1212 16:00:25.060832 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:25 crc kubenswrapper[4693]: I1212 16:00:25.061289 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.099161 4693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-t9zpr_openshift-operators_51db3b1a-8b64-47d6-b09c-a8356e855606_0(4cc374aedf27d977a70a15b39ca247a65a6f6871435459fc1a30472ed6431b5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.099222 4693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-t9zpr_openshift-operators_51db3b1a-8b64-47d6-b09c-a8356e855606_0(4cc374aedf27d977a70a15b39ca247a65a6f6871435459fc1a30472ed6431b5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.099603 4693 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-t9zpr_openshift-operators_51db3b1a-8b64-47d6-b09c-a8356e855606_0(4cc374aedf27d977a70a15b39ca247a65a6f6871435459fc1a30472ed6431b5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.099681 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-t9zpr_openshift-operators(51db3b1a-8b64-47d6-b09c-a8356e855606)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-t9zpr_openshift-operators(51db3b1a-8b64-47d6-b09c-a8356e855606)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-t9zpr_openshift-operators_51db3b1a-8b64-47d6-b09c-a8356e855606_0(4cc374aedf27d977a70a15b39ca247a65a6f6871435459fc1a30472ed6431b5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" podUID="51db3b1a-8b64-47d6-b09c-a8356e855606" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.106983 4693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx_openshift-operators_3581c9cb-4a5e-436c-969e-1a7311dbefc3_0(afe5f9fd1cccf7557539b5b07a0b47a992efcb78cbeafa5519a7e775a6a4ec61): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.107057 4693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx_openshift-operators_3581c9cb-4a5e-436c-969e-1a7311dbefc3_0(afe5f9fd1cccf7557539b5b07a0b47a992efcb78cbeafa5519a7e775a6a4ec61): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.107082 4693 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx_openshift-operators_3581c9cb-4a5e-436c-969e-1a7311dbefc3_0(afe5f9fd1cccf7557539b5b07a0b47a992efcb78cbeafa5519a7e775a6a4ec61): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.107132 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx_openshift-operators(3581c9cb-4a5e-436c-969e-1a7311dbefc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx_openshift-operators(3581c9cb-4a5e-436c-969e-1a7311dbefc3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx_openshift-operators_3581c9cb-4a5e-436c-969e-1a7311dbefc3_0(afe5f9fd1cccf7557539b5b07a0b47a992efcb78cbeafa5519a7e775a6a4ec61): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" podUID="3581c9cb-4a5e-436c-969e-1a7311dbefc3" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.151230 4693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-clst5_openshift-operators_2dd59d35-d975-49a2-8a23-db068e921965_0(a5bcba61562dcfbab9fbd7e8d89721263dabf9f88c54a4610db41a2bf0def621): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.151336 4693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-clst5_openshift-operators_2dd59d35-d975-49a2-8a23-db068e921965_0(a5bcba61562dcfbab9fbd7e8d89721263dabf9f88c54a4610db41a2bf0def621): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.151363 4693 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-clst5_openshift-operators_2dd59d35-d975-49a2-8a23-db068e921965_0(a5bcba61562dcfbab9fbd7e8d89721263dabf9f88c54a4610db41a2bf0def621): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:25 crc kubenswrapper[4693]: E1212 16:00:25.151420 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-clst5_openshift-operators(2dd59d35-d975-49a2-8a23-db068e921965)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-clst5_openshift-operators(2dd59d35-d975-49a2-8a23-db068e921965)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-clst5_openshift-operators_2dd59d35-d975-49a2-8a23-db068e921965_0(a5bcba61562dcfbab9fbd7e8d89721263dabf9f88c54a4610db41a2bf0def621): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-clst5" podUID="2dd59d35-d975-49a2-8a23-db068e921965" Dec 12 16:00:25 crc kubenswrapper[4693]: I1212 16:00:25.296937 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xs4tf" podUID="95322c55-e1a1-41d4-b362-bfa4200629f5" containerName="registry-server" containerID="cri-o://70d384cc01cca49a3a5d958389bb806ded4685a92a5b7f7682f7e99ee6f05c34" gracePeriod=2 Dec 12 16:00:25 crc kubenswrapper[4693]: I1212 16:00:25.964299 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.076460 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95322c55-e1a1-41d4-b362-bfa4200629f5-utilities\") pod \"95322c55-e1a1-41d4-b362-bfa4200629f5\" (UID: \"95322c55-e1a1-41d4-b362-bfa4200629f5\") " Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.076525 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzhtj\" (UniqueName: \"kubernetes.io/projected/95322c55-e1a1-41d4-b362-bfa4200629f5-kube-api-access-gzhtj\") pod \"95322c55-e1a1-41d4-b362-bfa4200629f5\" (UID: \"95322c55-e1a1-41d4-b362-bfa4200629f5\") " Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.076608 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95322c55-e1a1-41d4-b362-bfa4200629f5-catalog-content\") pod \"95322c55-e1a1-41d4-b362-bfa4200629f5\" (UID: \"95322c55-e1a1-41d4-b362-bfa4200629f5\") " Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.077881 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95322c55-e1a1-41d4-b362-bfa4200629f5-utilities" (OuterVolumeSpecName: "utilities") pod "95322c55-e1a1-41d4-b362-bfa4200629f5" (UID: "95322c55-e1a1-41d4-b362-bfa4200629f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.083092 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95322c55-e1a1-41d4-b362-bfa4200629f5-kube-api-access-gzhtj" (OuterVolumeSpecName: "kube-api-access-gzhtj") pod "95322c55-e1a1-41d4-b362-bfa4200629f5" (UID: "95322c55-e1a1-41d4-b362-bfa4200629f5"). InnerVolumeSpecName "kube-api-access-gzhtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.177188 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95322c55-e1a1-41d4-b362-bfa4200629f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95322c55-e1a1-41d4-b362-bfa4200629f5" (UID: "95322c55-e1a1-41d4-b362-bfa4200629f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.179140 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95322c55-e1a1-41d4-b362-bfa4200629f5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.179169 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95322c55-e1a1-41d4-b362-bfa4200629f5-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.179179 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzhtj\" (UniqueName: \"kubernetes.io/projected/95322c55-e1a1-41d4-b362-bfa4200629f5-kube-api-access-gzhtj\") on node \"crc\" DevicePath \"\"" Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.304246 4693 generic.go:334] "Generic (PLEG): container finished" podID="95322c55-e1a1-41d4-b362-bfa4200629f5" containerID="70d384cc01cca49a3a5d958389bb806ded4685a92a5b7f7682f7e99ee6f05c34" exitCode=0 Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.304303 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xs4tf" event={"ID":"95322c55-e1a1-41d4-b362-bfa4200629f5","Type":"ContainerDied","Data":"70d384cc01cca49a3a5d958389bb806ded4685a92a5b7f7682f7e99ee6f05c34"} Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.304349 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xs4tf" event={"ID":"95322c55-e1a1-41d4-b362-bfa4200629f5","Type":"ContainerDied","Data":"2affd4bd3b73321fb6ec37401e52a24e75ed6c5a9222d4d5c5fe67964d6ec29b"} Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.304371 4693 scope.go:117] "RemoveContainer" containerID="70d384cc01cca49a3a5d958389bb806ded4685a92a5b7f7682f7e99ee6f05c34" Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.304318 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xs4tf" Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.323432 4693 scope.go:117] "RemoveContainer" containerID="4d621771d9b3453938ad4c71f7bc9e39c6c54f75e37644365d2d60dc8d6256ec" Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.342469 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xs4tf"] Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.346152 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xs4tf"] Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.363360 4693 scope.go:117] "RemoveContainer" containerID="ee212fdca6e7b1f49aac97ed8579563c0b58e041cb76fb801cb59d6a6e129697" Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.377997 4693 scope.go:117] "RemoveContainer" containerID="70d384cc01cca49a3a5d958389bb806ded4685a92a5b7f7682f7e99ee6f05c34" Dec 12 16:00:26 crc kubenswrapper[4693]: E1212 16:00:26.378516 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70d384cc01cca49a3a5d958389bb806ded4685a92a5b7f7682f7e99ee6f05c34\": container with ID starting with 70d384cc01cca49a3a5d958389bb806ded4685a92a5b7f7682f7e99ee6f05c34 not found: ID does not exist" containerID="70d384cc01cca49a3a5d958389bb806ded4685a92a5b7f7682f7e99ee6f05c34" Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.378550 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70d384cc01cca49a3a5d958389bb806ded4685a92a5b7f7682f7e99ee6f05c34"} err="failed to get container status \"70d384cc01cca49a3a5d958389bb806ded4685a92a5b7f7682f7e99ee6f05c34\": rpc error: code = NotFound desc = could not find container \"70d384cc01cca49a3a5d958389bb806ded4685a92a5b7f7682f7e99ee6f05c34\": container with ID starting with 70d384cc01cca49a3a5d958389bb806ded4685a92a5b7f7682f7e99ee6f05c34 not found: ID does not exist" Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.378576 4693 scope.go:117] "RemoveContainer" containerID="4d621771d9b3453938ad4c71f7bc9e39c6c54f75e37644365d2d60dc8d6256ec" Dec 12 16:00:26 crc kubenswrapper[4693]: E1212 16:00:26.378803 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d621771d9b3453938ad4c71f7bc9e39c6c54f75e37644365d2d60dc8d6256ec\": container with ID starting with 4d621771d9b3453938ad4c71f7bc9e39c6c54f75e37644365d2d60dc8d6256ec not found: ID does not exist" containerID="4d621771d9b3453938ad4c71f7bc9e39c6c54f75e37644365d2d60dc8d6256ec" Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.378830 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d621771d9b3453938ad4c71f7bc9e39c6c54f75e37644365d2d60dc8d6256ec"} err="failed to get container status \"4d621771d9b3453938ad4c71f7bc9e39c6c54f75e37644365d2d60dc8d6256ec\": rpc error: code = NotFound desc = could not find container \"4d621771d9b3453938ad4c71f7bc9e39c6c54f75e37644365d2d60dc8d6256ec\": container with ID starting with 4d621771d9b3453938ad4c71f7bc9e39c6c54f75e37644365d2d60dc8d6256ec not found: ID does not exist" Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.378848 4693 scope.go:117] "RemoveContainer" containerID="ee212fdca6e7b1f49aac97ed8579563c0b58e041cb76fb801cb59d6a6e129697" Dec 12 16:00:26 crc kubenswrapper[4693]: E1212 16:00:26.379076 4693 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"ee212fdca6e7b1f49aac97ed8579563c0b58e041cb76fb801cb59d6a6e129697\": container with ID starting with ee212fdca6e7b1f49aac97ed8579563c0b58e041cb76fb801cb59d6a6e129697 not found: ID does not exist" containerID="ee212fdca6e7b1f49aac97ed8579563c0b58e041cb76fb801cb59d6a6e129697" Dec 12 16:00:26 crc kubenswrapper[4693]: I1212 16:00:26.379108 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee212fdca6e7b1f49aac97ed8579563c0b58e041cb76fb801cb59d6a6e129697"} err="failed to get container status \"ee212fdca6e7b1f49aac97ed8579563c0b58e041cb76fb801cb59d6a6e129697\": rpc error: code = NotFound desc = could not find container \"ee212fdca6e7b1f49aac97ed8579563c0b58e041cb76fb801cb59d6a6e129697\": container with ID starting with ee212fdca6e7b1f49aac97ed8579563c0b58e041cb76fb801cb59d6a6e129697 not found: ID does not exist" Dec 12 16:00:27 crc kubenswrapper[4693]: I1212 16:00:27.363871 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95322c55-e1a1-41d4-b362-bfa4200629f5" path="/var/lib/kubelet/pods/95322c55-e1a1-41d4-b362-bfa4200629f5/volumes" Dec 12 16:00:36 crc kubenswrapper[4693]: I1212 16:00:36.356178 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:36 crc kubenswrapper[4693]: I1212 16:00:36.358504 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:36 crc kubenswrapper[4693]: I1212 16:00:36.789582 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-clst5"] Dec 12 16:00:37 crc kubenswrapper[4693]: I1212 16:00:37.356820 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" Dec 12 16:00:37 crc kubenswrapper[4693]: I1212 16:00:37.357356 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" Dec 12 16:00:37 crc kubenswrapper[4693]: I1212 16:00:37.398164 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-clst5" event={"ID":"2dd59d35-d975-49a2-8a23-db068e921965","Type":"ContainerStarted","Data":"74ee2af111942e1565c53ce1a68d68ae3760f2ef7fc85707793b6fb94156ac90"} Dec 12 16:00:37 crc kubenswrapper[4693]: I1212 16:00:37.856989 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk"] Dec 12 16:00:38 crc kubenswrapper[4693]: I1212 16:00:38.357335 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" Dec 12 16:00:38 crc kubenswrapper[4693]: I1212 16:00:38.357838 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" Dec 12 16:00:38 crc kubenswrapper[4693]: I1212 16:00:38.410982 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" event={"ID":"9f0ae906-0fb5-4478-808d-e6b1f5785736","Type":"ContainerStarted","Data":"ac408f44a1229bf772f604f9918b9fb437695cca331df55abb43e4a000247a35"} Dec 12 16:00:38 crc kubenswrapper[4693]: I1212 16:00:38.587106 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx"] Dec 12 16:00:39 crc kubenswrapper[4693]: I1212 16:00:39.356611 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:39 crc kubenswrapper[4693]: I1212 16:00:39.357528 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:39 crc kubenswrapper[4693]: I1212 16:00:39.445430 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" event={"ID":"3581c9cb-4a5e-436c-969e-1a7311dbefc3","Type":"ContainerStarted","Data":"ebc41abd4071cf1059f6a8ea24cc588aca213b0c608fb3d5a4e7d30582d814a2"} Dec 12 16:00:39 crc kubenswrapper[4693]: I1212 16:00:39.590430 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-t9zpr"] Dec 12 16:00:39 crc kubenswrapper[4693]: W1212 16:00:39.598582 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51db3b1a_8b64_47d6_b09c_a8356e855606.slice/crio-4aae9113bd2b2d6521a4604ba73bd26ddf35e76243115d21e0f85735c3a9be54 WatchSource:0}: Error finding container 4aae9113bd2b2d6521a4604ba73bd26ddf35e76243115d21e0f85735c3a9be54: Status 404 returned error can't find the container with id 4aae9113bd2b2d6521a4604ba73bd26ddf35e76243115d21e0f85735c3a9be54 Dec 12 16:00:40 crc kubenswrapper[4693]: I1212 16:00:40.356847 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" Dec 12 16:00:40 crc kubenswrapper[4693]: I1212 16:00:40.357743 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" Dec 12 16:00:40 crc kubenswrapper[4693]: I1212 16:00:40.461876 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" event={"ID":"51db3b1a-8b64-47d6-b09c-a8356e855606","Type":"ContainerStarted","Data":"4aae9113bd2b2d6521a4604ba73bd26ddf35e76243115d21e0f85735c3a9be54"} Dec 12 16:00:40 crc kubenswrapper[4693]: I1212 16:00:40.819578 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7"] Dec 12 16:00:41 crc kubenswrapper[4693]: I1212 16:00:41.483858 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" event={"ID":"f10d4e25-5e26-47fa-a4b4-443186694122","Type":"ContainerStarted","Data":"24bbff12e71f8ff5225d643ee1c8486799019d47c7a8ad14293c80956d26ac21"} Dec 12 16:00:43 crc kubenswrapper[4693]: I1212 16:00:43.903613 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c6f2k" Dec 12 16:00:53 crc kubenswrapper[4693]: I1212 16:00:53.614412 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-clst5" event={"ID":"2dd59d35-d975-49a2-8a23-db068e921965","Type":"ContainerStarted","Data":"6a16d680dc98771a650dd9cbe340e9af9eec2abd95bc1173a575f4c1274537bc"} Dec 12 16:00:53 crc kubenswrapper[4693]: I1212 16:00:53.615776 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:00:53 crc kubenswrapper[4693]: I1212 16:00:53.624438 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:53 crc kubenswrapper[4693]: I1212 16:00:53.625566 4693 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-t9zpr container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.13:8081/healthz\": dial tcp 10.217.0.13:8081: connect: connection refused" start-of-body= Dec 12 16:00:53 crc kubenswrapper[4693]: I1212 16:00:53.625606 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" podUID="51db3b1a-8b64-47d6-b09c-a8356e855606" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.13:8081/healthz\": dial tcp 10.217.0.13:8081: connect: connection refused" Dec 12 16:00:53 crc kubenswrapper[4693]: I1212 16:00:53.638554 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-clst5" podStartSLOduration=19.171773759 podStartE2EDuration="35.638535547s" podCreationTimestamp="2025-12-12 16:00:18 +0000 UTC" firstStartedPulling="2025-12-12 16:00:36.803223768 +0000 UTC m=+863.971863369" lastFinishedPulling="2025-12-12 16:00:53.269985556 +0000 UTC m=+880.438625157" observedRunningTime="2025-12-12 16:00:53.633927881 +0000 UTC m=+880.802567482" watchObservedRunningTime="2025-12-12 16:00:53.638535547 +0000 UTC m=+880.807175148" Dec 12 16:00:53 crc kubenswrapper[4693]: I1212 16:00:53.670561 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" podStartSLOduration=21.960245933 podStartE2EDuration="35.670542456s" podCreationTimestamp="2025-12-12 16:00:18 +0000 UTC" 
firstStartedPulling="2025-12-12 16:00:39.600205853 +0000 UTC m=+866.768845454" lastFinishedPulling="2025-12-12 16:00:53.310502376 +0000 UTC m=+880.479141977" observedRunningTime="2025-12-12 16:00:53.665770686 +0000 UTC m=+880.834410307" watchObservedRunningTime="2025-12-12 16:00:53.670542456 +0000 UTC m=+880.839182057" Dec 12 16:00:54 crc kubenswrapper[4693]: I1212 16:00:54.632680 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" event={"ID":"9f0ae906-0fb5-4478-808d-e6b1f5785736","Type":"ContainerStarted","Data":"a1a23edec2727f2c2dbddded07de42852188b613191aa0f2dd283e9bf946981d"} Dec 12 16:00:54 crc kubenswrapper[4693]: I1212 16:00:54.634167 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" event={"ID":"f10d4e25-5e26-47fa-a4b4-443186694122","Type":"ContainerStarted","Data":"206794d2dcb0f7a0ed39b12170da59cfd51ff2d991f694367ea64415be725ea2"} Dec 12 16:00:54 crc kubenswrapper[4693]: I1212 16:00:54.636079 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" event={"ID":"3581c9cb-4a5e-436c-969e-1a7311dbefc3","Type":"ContainerStarted","Data":"d88f9895a2111b063ad56c2b08a6691f39e8b9e4451d28d40002ff4eca14a5e1"} Dec 12 16:00:54 crc kubenswrapper[4693]: I1212 16:00:54.637959 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" event={"ID":"51db3b1a-8b64-47d6-b09c-a8356e855606","Type":"ContainerStarted","Data":"8a4f5b5cfbb2c8eb82534b1eafe9a4665fb6b41f6f8edda17322eafce6ddd19b"} Dec 12 16:00:54 crc kubenswrapper[4693]: I1212 16:00:54.655994 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-rdzsk" podStartSLOduration=21.217598831 podStartE2EDuration="36.655974194s" podCreationTimestamp="2025-12-12 16:00:18 +0000 UTC" firstStartedPulling="2025-12-12 16:00:37.871867216 +0000 UTC m=+865.040506817" lastFinishedPulling="2025-12-12 16:00:53.310242579 +0000 UTC m=+880.478882180" observedRunningTime="2025-12-12 16:00:54.651680287 +0000 UTC m=+881.820319888" watchObservedRunningTime="2025-12-12 16:00:54.655974194 +0000 UTC m=+881.824613795" Dec 12 16:00:54 crc kubenswrapper[4693]: I1212 16:00:54.673373 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bcdc4785-sf5rx" podStartSLOduration=21.986073984 podStartE2EDuration="36.673353706s" podCreationTimestamp="2025-12-12 16:00:18 +0000 UTC" firstStartedPulling="2025-12-12 16:00:38.599646764 +0000 UTC m=+865.768286375" lastFinishedPulling="2025-12-12 16:00:53.286926486 +0000 UTC m=+880.455566097" observedRunningTime="2025-12-12 16:00:54.672638916 +0000 UTC m=+881.841278517" watchObservedRunningTime="2025-12-12 16:00:54.673353706 +0000 UTC m=+881.841993307" Dec 12 16:00:54 crc kubenswrapper[4693]: I1212 16:00:54.689220 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" Dec 12 16:00:54 crc kubenswrapper[4693]: I1212 16:00:54.703532 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xbtg7" podStartSLOduration=24.241127459 podStartE2EDuration="36.703491054s" podCreationTimestamp="2025-12-12 16:00:18 +0000 UTC" 
firstStartedPulling="2025-12-12 16:00:40.828066676 +0000 UTC m=+867.996706277" lastFinishedPulling="2025-12-12 16:00:53.290430271 +0000 UTC m=+880.459069872" observedRunningTime="2025-12-12 16:00:54.698747726 +0000 UTC m=+881.867387327" watchObservedRunningTime="2025-12-12 16:00:54.703491054 +0000 UTC m=+881.872130655" Dec 12 16:00:59 crc kubenswrapper[4693]: I1212 16:00:59.103824 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-clst5" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.295522 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-zqql2"] Dec 12 16:01:04 crc kubenswrapper[4693]: E1212 16:01:04.296420 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95322c55-e1a1-41d4-b362-bfa4200629f5" containerName="extract-content" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.296441 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="95322c55-e1a1-41d4-b362-bfa4200629f5" containerName="extract-content" Dec 12 16:01:04 crc kubenswrapper[4693]: E1212 16:01:04.296463 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95322c55-e1a1-41d4-b362-bfa4200629f5" containerName="extract-utilities" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.296471 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="95322c55-e1a1-41d4-b362-bfa4200629f5" containerName="extract-utilities" Dec 12 16:01:04 crc kubenswrapper[4693]: E1212 16:01:04.296483 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95322c55-e1a1-41d4-b362-bfa4200629f5" containerName="registry-server" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.296490 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="95322c55-e1a1-41d4-b362-bfa4200629f5" containerName="registry-server" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.296644 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="95322c55-e1a1-41d4-b362-bfa4200629f5" containerName="registry-server" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.297137 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-zqql2" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.302999 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-x4m5g"] Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.303771 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-x4m5g" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.305311 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.305650 4693 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-rknkk" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.306057 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.306462 4693 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-nch7r" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.312433 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-zqql2"] Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.337398 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-x4m5g"] Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.365451 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-dbkxt"] Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.366481 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-dbkxt" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.372120 4693 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-pxpxl" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.372201 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz78c\" (UniqueName: \"kubernetes.io/projected/4f0de0f6-0ea4-4228-b063-815ed26e6cd0-kube-api-access-dz78c\") pod \"cert-manager-5b446d88c5-x4m5g\" (UID: \"4f0de0f6-0ea4-4228-b063-815ed26e6cd0\") " pod="cert-manager/cert-manager-5b446d88c5-x4m5g" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.372299 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72rcs\" (UniqueName: \"kubernetes.io/projected/ca9a63ee-fc2b-4b68-ae4d-c8e4be2da3fb-kube-api-access-72rcs\") pod \"cert-manager-cainjector-7f985d654d-zqql2\" (UID: \"ca9a63ee-fc2b-4b68-ae4d-c8e4be2da3fb\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-zqql2" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.385872 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-dbkxt"] Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.473810 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz78c\" (UniqueName: \"kubernetes.io/projected/4f0de0f6-0ea4-4228-b063-815ed26e6cd0-kube-api-access-dz78c\") pod \"cert-manager-5b446d88c5-x4m5g\" (UID: \"4f0de0f6-0ea4-4228-b063-815ed26e6cd0\") " pod="cert-manager/cert-manager-5b446d88c5-x4m5g" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.473881 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72rcs\" (UniqueName: \"kubernetes.io/projected/ca9a63ee-fc2b-4b68-ae4d-c8e4be2da3fb-kube-api-access-72rcs\") pod \"cert-manager-cainjector-7f985d654d-zqql2\" (UID: \"ca9a63ee-fc2b-4b68-ae4d-c8e4be2da3fb\") " 
pod="cert-manager/cert-manager-cainjector-7f985d654d-zqql2" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.473954 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwk77\" (UniqueName: \"kubernetes.io/projected/09a2f99b-1398-4e56-ac77-ae6e4d9aaac8-kube-api-access-dwk77\") pod \"cert-manager-webhook-5655c58dd6-dbkxt\" (UID: \"09a2f99b-1398-4e56-ac77-ae6e4d9aaac8\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-dbkxt" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.500229 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz78c\" (UniqueName: \"kubernetes.io/projected/4f0de0f6-0ea4-4228-b063-815ed26e6cd0-kube-api-access-dz78c\") pod \"cert-manager-5b446d88c5-x4m5g\" (UID: \"4f0de0f6-0ea4-4228-b063-815ed26e6cd0\") " pod="cert-manager/cert-manager-5b446d88c5-x4m5g" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.509324 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72rcs\" (UniqueName: \"kubernetes.io/projected/ca9a63ee-fc2b-4b68-ae4d-c8e4be2da3fb-kube-api-access-72rcs\") pod \"cert-manager-cainjector-7f985d654d-zqql2\" (UID: \"ca9a63ee-fc2b-4b68-ae4d-c8e4be2da3fb\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-zqql2" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.575774 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwk77\" (UniqueName: \"kubernetes.io/projected/09a2f99b-1398-4e56-ac77-ae6e4d9aaac8-kube-api-access-dwk77\") pod \"cert-manager-webhook-5655c58dd6-dbkxt\" (UID: \"09a2f99b-1398-4e56-ac77-ae6e4d9aaac8\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-dbkxt" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.591097 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwk77\" (UniqueName: \"kubernetes.io/projected/09a2f99b-1398-4e56-ac77-ae6e4d9aaac8-kube-api-access-dwk77\") pod \"cert-manager-webhook-5655c58dd6-dbkxt\" (UID: \"09a2f99b-1398-4e56-ac77-ae6e4d9aaac8\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-dbkxt" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.619633 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-zqql2" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.630933 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-x4m5g" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.687253 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-dbkxt" Dec 12 16:01:04 crc kubenswrapper[4693]: I1212 16:01:04.976705 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-dbkxt"] Dec 12 16:01:05 crc kubenswrapper[4693]: I1212 16:01:05.088745 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-x4m5g"] Dec 12 16:01:05 crc kubenswrapper[4693]: W1212 16:01:05.094743 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca9a63ee_fc2b_4b68_ae4d_c8e4be2da3fb.slice/crio-e4533d32c5285eedcbcadb847fd9f8db3af298da61698254225602be9f7c93ae WatchSource:0}: Error finding container e4533d32c5285eedcbcadb847fd9f8db3af298da61698254225602be9f7c93ae: Status 404 returned error can't find the container with id e4533d32c5285eedcbcadb847fd9f8db3af298da61698254225602be9f7c93ae Dec 12 16:01:05 crc kubenswrapper[4693]: I1212 16:01:05.097587 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-zqql2"] Dec 12 16:01:05 crc kubenswrapper[4693]: I1212 16:01:05.723550 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-zqql2" event={"ID":"ca9a63ee-fc2b-4b68-ae4d-c8e4be2da3fb","Type":"ContainerStarted","Data":"e4533d32c5285eedcbcadb847fd9f8db3af298da61698254225602be9f7c93ae"} Dec 12 16:01:05 crc kubenswrapper[4693]: I1212 16:01:05.724587 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-dbkxt" event={"ID":"09a2f99b-1398-4e56-ac77-ae6e4d9aaac8","Type":"ContainerStarted","Data":"ad8d3972fbadb1639e27cdc80fb4ab7bd015624dfa09cb6eac13f42b633b0f09"} Dec 12 16:01:05 crc kubenswrapper[4693]: I1212 16:01:05.725406 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-x4m5g" event={"ID":"4f0de0f6-0ea4-4228-b063-815ed26e6cd0","Type":"ContainerStarted","Data":"aab47c6500a3bcc1567e30efe252ccf43166107d826560b12e30e4742d82d744"} Dec 12 16:01:09 crc kubenswrapper[4693]: I1212 16:01:09.758016 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-dbkxt" event={"ID":"09a2f99b-1398-4e56-ac77-ae6e4d9aaac8","Type":"ContainerStarted","Data":"a59ae5199448fa80d58d1375beba452532191f606a7935fe117f954cdf770c9e"} Dec 12 16:01:09 crc kubenswrapper[4693]: I1212 16:01:09.758159 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-dbkxt" Dec 12 16:01:09 crc kubenswrapper[4693]: I1212 16:01:09.759862 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-x4m5g" event={"ID":"4f0de0f6-0ea4-4228-b063-815ed26e6cd0","Type":"ContainerStarted","Data":"c7ad23eddfb2588abd08e16ce7bb4fc1f2765bf65f5b2f570ef359ef50de231e"} Dec 12 16:01:09 crc kubenswrapper[4693]: I1212 16:01:09.762007 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-zqql2" event={"ID":"ca9a63ee-fc2b-4b68-ae4d-c8e4be2da3fb","Type":"ContainerStarted","Data":"e4673a5689efbcb2d447ef461d1d52551442f3d3ad8166261f6f91130ed31b6f"} Dec 12 16:01:09 crc kubenswrapper[4693]: I1212 16:01:09.779718 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-dbkxt" podStartSLOduration=1.326072936 podStartE2EDuration="5.779694322s" 
podCreationTimestamp="2025-12-12 16:01:04 +0000 UTC" firstStartedPulling="2025-12-12 16:01:04.984262982 +0000 UTC m=+892.152902583" lastFinishedPulling="2025-12-12 16:01:09.437884368 +0000 UTC m=+896.606523969" observedRunningTime="2025-12-12 16:01:09.775342114 +0000 UTC m=+896.943981715" watchObservedRunningTime="2025-12-12 16:01:09.779694322 +0000 UTC m=+896.948333923" Dec 12 16:01:09 crc kubenswrapper[4693]: I1212 16:01:09.790355 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-x4m5g" podStartSLOduration=1.449281413 podStartE2EDuration="5.790340201s" podCreationTimestamp="2025-12-12 16:01:04 +0000 UTC" firstStartedPulling="2025-12-12 16:01:05.100747816 +0000 UTC m=+892.269387417" lastFinishedPulling="2025-12-12 16:01:09.441806604 +0000 UTC m=+896.610446205" observedRunningTime="2025-12-12 16:01:09.788035269 +0000 UTC m=+896.956674870" watchObservedRunningTime="2025-12-12 16:01:09.790340201 +0000 UTC m=+896.958979802" Dec 12 16:01:09 crc kubenswrapper[4693]: I1212 16:01:09.805683 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-zqql2" podStartSLOduration=1.456576162 podStartE2EDuration="5.805662548s" podCreationTimestamp="2025-12-12 16:01:04 +0000 UTC" firstStartedPulling="2025-12-12 16:01:05.09647908 +0000 UTC m=+892.265118681" lastFinishedPulling="2025-12-12 16:01:09.445565456 +0000 UTC m=+896.614205067" observedRunningTime="2025-12-12 16:01:09.80318184 +0000 UTC m=+896.971821441" watchObservedRunningTime="2025-12-12 16:01:09.805662548 +0000 UTC m=+896.974302149" Dec 12 16:01:13 crc kubenswrapper[4693]: I1212 16:01:13.119245 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vccdw"] Dec 12 16:01:13 crc kubenswrapper[4693]: I1212 16:01:13.127263 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vccdw"] Dec 12 16:01:13 crc kubenswrapper[4693]: I1212 16:01:13.127474 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:13 crc kubenswrapper[4693]: I1212 16:01:13.204561 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82956395-7bc8-446b-b915-5b74d3faa72c-utilities\") pod \"certified-operators-vccdw\" (UID: \"82956395-7bc8-446b-b915-5b74d3faa72c\") " pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:13 crc kubenswrapper[4693]: I1212 16:01:13.204620 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82956395-7bc8-446b-b915-5b74d3faa72c-catalog-content\") pod \"certified-operators-vccdw\" (UID: \"82956395-7bc8-446b-b915-5b74d3faa72c\") " pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:13 crc kubenswrapper[4693]: I1212 16:01:13.204690 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwd98\" (UniqueName: \"kubernetes.io/projected/82956395-7bc8-446b-b915-5b74d3faa72c-kube-api-access-xwd98\") pod \"certified-operators-vccdw\" (UID: \"82956395-7bc8-446b-b915-5b74d3faa72c\") " pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:13 crc kubenswrapper[4693]: I1212 16:01:13.306250 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82956395-7bc8-446b-b915-5b74d3faa72c-utilities\") pod \"certified-operators-vccdw\" (UID: \"82956395-7bc8-446b-b915-5b74d3faa72c\") " pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:13 crc kubenswrapper[4693]: I1212 16:01:13.306668 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82956395-7bc8-446b-b915-5b74d3faa72c-catalog-content\") pod \"certified-operators-vccdw\" (UID: \"82956395-7bc8-446b-b915-5b74d3faa72c\") " pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:13 crc kubenswrapper[4693]: I1212 16:01:13.306760 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwd98\" (UniqueName: \"kubernetes.io/projected/82956395-7bc8-446b-b915-5b74d3faa72c-kube-api-access-xwd98\") pod \"certified-operators-vccdw\" (UID: \"82956395-7bc8-446b-b915-5b74d3faa72c\") " pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:13 crc kubenswrapper[4693]: I1212 16:01:13.307648 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82956395-7bc8-446b-b915-5b74d3faa72c-utilities\") pod \"certified-operators-vccdw\" (UID: \"82956395-7bc8-446b-b915-5b74d3faa72c\") " pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:13 crc kubenswrapper[4693]: I1212 16:01:13.307931 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82956395-7bc8-446b-b915-5b74d3faa72c-catalog-content\") pod \"certified-operators-vccdw\" (UID: \"82956395-7bc8-446b-b915-5b74d3faa72c\") " pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:13 crc kubenswrapper[4693]: I1212 16:01:13.328502 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwd98\" (UniqueName: \"kubernetes.io/projected/82956395-7bc8-446b-b915-5b74d3faa72c-kube-api-access-xwd98\") pod 
\"certified-operators-vccdw\" (UID: \"82956395-7bc8-446b-b915-5b74d3faa72c\") " pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:13 crc kubenswrapper[4693]: I1212 16:01:13.448554 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:13 crc kubenswrapper[4693]: I1212 16:01:13.972073 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vccdw"] Dec 12 16:01:14 crc kubenswrapper[4693]: I1212 16:01:14.691135 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-dbkxt" Dec 12 16:01:14 crc kubenswrapper[4693]: I1212 16:01:14.794403 4693 generic.go:334] "Generic (PLEG): container finished" podID="82956395-7bc8-446b-b915-5b74d3faa72c" containerID="5db07a65d19ba9550cf4851204e8005a9906000f661c4e1aa3bc1700909b1b9e" exitCode=0 Dec 12 16:01:14 crc kubenswrapper[4693]: I1212 16:01:14.794441 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vccdw" event={"ID":"82956395-7bc8-446b-b915-5b74d3faa72c","Type":"ContainerDied","Data":"5db07a65d19ba9550cf4851204e8005a9906000f661c4e1aa3bc1700909b1b9e"} Dec 12 16:01:14 crc kubenswrapper[4693]: I1212 16:01:14.794468 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vccdw" event={"ID":"82956395-7bc8-446b-b915-5b74d3faa72c","Type":"ContainerStarted","Data":"a7265de3f42ae17ddb4e9026a64996b56948db73a2b57ac28bd3010ebefaa362"} Dec 12 16:01:15 crc kubenswrapper[4693]: I1212 16:01:15.803770 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vccdw" event={"ID":"82956395-7bc8-446b-b915-5b74d3faa72c","Type":"ContainerStarted","Data":"846989bc064778d0a045b73672185ec5d9d03650493158de260a0a665c181fe4"} Dec 12 16:01:16 crc kubenswrapper[4693]: I1212 16:01:16.812249 4693 generic.go:334] "Generic (PLEG): container finished" podID="82956395-7bc8-446b-b915-5b74d3faa72c" containerID="846989bc064778d0a045b73672185ec5d9d03650493158de260a0a665c181fe4" exitCode=0 Dec 12 16:01:16 crc kubenswrapper[4693]: I1212 16:01:16.812310 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vccdw" event={"ID":"82956395-7bc8-446b-b915-5b74d3faa72c","Type":"ContainerDied","Data":"846989bc064778d0a045b73672185ec5d9d03650493158de260a0a665c181fe4"} Dec 12 16:01:17 crc kubenswrapper[4693]: I1212 16:01:17.823514 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vccdw" event={"ID":"82956395-7bc8-446b-b915-5b74d3faa72c","Type":"ContainerStarted","Data":"a14b2170d2c319cb662125c602069698b45264c524377672f31bcb47ae4454b7"} Dec 12 16:01:17 crc kubenswrapper[4693]: I1212 16:01:17.848440 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vccdw" podStartSLOduration=2.157077338 podStartE2EDuration="4.848422583s" podCreationTimestamp="2025-12-12 16:01:13 +0000 UTC" firstStartedPulling="2025-12-12 16:01:14.796958906 +0000 UTC m=+901.965598507" lastFinishedPulling="2025-12-12 16:01:17.488304151 +0000 UTC m=+904.656943752" observedRunningTime="2025-12-12 16:01:17.844285791 +0000 UTC m=+905.012925392" watchObservedRunningTime="2025-12-12 16:01:17.848422583 +0000 UTC m=+905.017062184" Dec 12 16:01:23 crc kubenswrapper[4693]: I1212 16:01:23.448949 4693 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:23 crc kubenswrapper[4693]: I1212 16:01:23.449488 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:23 crc kubenswrapper[4693]: I1212 16:01:23.493969 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:23 crc kubenswrapper[4693]: I1212 16:01:23.900686 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:23 crc kubenswrapper[4693]: I1212 16:01:23.948345 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vccdw"] Dec 12 16:01:25 crc kubenswrapper[4693]: I1212 16:01:25.868582 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vccdw" podUID="82956395-7bc8-446b-b915-5b74d3faa72c" containerName="registry-server" containerID="cri-o://a14b2170d2c319cb662125c602069698b45264c524377672f31bcb47ae4454b7" gracePeriod=2 Dec 12 16:01:29 crc kubenswrapper[4693]: I1212 16:01:29.903588 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vccdw_82956395-7bc8-446b-b915-5b74d3faa72c/registry-server/0.log" Dec 12 16:01:29 crc kubenswrapper[4693]: I1212 16:01:29.905081 4693 generic.go:334] "Generic (PLEG): container finished" podID="82956395-7bc8-446b-b915-5b74d3faa72c" containerID="a14b2170d2c319cb662125c602069698b45264c524377672f31bcb47ae4454b7" exitCode=137 Dec 12 16:01:29 crc kubenswrapper[4693]: I1212 16:01:29.905138 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vccdw" event={"ID":"82956395-7bc8-446b-b915-5b74d3faa72c","Type":"ContainerDied","Data":"a14b2170d2c319cb662125c602069698b45264c524377672f31bcb47ae4454b7"} Dec 12 16:01:29 crc kubenswrapper[4693]: I1212 16:01:29.946213 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vccdw_82956395-7bc8-446b-b915-5b74d3faa72c/registry-server/0.log" Dec 12 16:01:29 crc kubenswrapper[4693]: I1212 16:01:29.947124 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:29 crc kubenswrapper[4693]: I1212 16:01:29.990119 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwd98\" (UniqueName: \"kubernetes.io/projected/82956395-7bc8-446b-b915-5b74d3faa72c-kube-api-access-xwd98\") pod \"82956395-7bc8-446b-b915-5b74d3faa72c\" (UID: \"82956395-7bc8-446b-b915-5b74d3faa72c\") " Dec 12 16:01:29 crc kubenswrapper[4693]: I1212 16:01:29.990175 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82956395-7bc8-446b-b915-5b74d3faa72c-catalog-content\") pod \"82956395-7bc8-446b-b915-5b74d3faa72c\" (UID: \"82956395-7bc8-446b-b915-5b74d3faa72c\") " Dec 12 16:01:29 crc kubenswrapper[4693]: I1212 16:01:29.990220 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82956395-7bc8-446b-b915-5b74d3faa72c-utilities\") pod \"82956395-7bc8-446b-b915-5b74d3faa72c\" (UID: \"82956395-7bc8-446b-b915-5b74d3faa72c\") " Dec 12 16:01:29 crc kubenswrapper[4693]: I1212 16:01:29.991309 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82956395-7bc8-446b-b915-5b74d3faa72c-utilities" (OuterVolumeSpecName: "utilities") pod "82956395-7bc8-446b-b915-5b74d3faa72c" (UID: "82956395-7bc8-446b-b915-5b74d3faa72c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:01:29 crc kubenswrapper[4693]: I1212 16:01:29.991652 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82956395-7bc8-446b-b915-5b74d3faa72c-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:01:30 crc kubenswrapper[4693]: I1212 16:01:30.004531 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82956395-7bc8-446b-b915-5b74d3faa72c-kube-api-access-xwd98" (OuterVolumeSpecName: "kube-api-access-xwd98") pod "82956395-7bc8-446b-b915-5b74d3faa72c" (UID: "82956395-7bc8-446b-b915-5b74d3faa72c"). InnerVolumeSpecName "kube-api-access-xwd98". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:01:30 crc kubenswrapper[4693]: I1212 16:01:30.057561 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82956395-7bc8-446b-b915-5b74d3faa72c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82956395-7bc8-446b-b915-5b74d3faa72c" (UID: "82956395-7bc8-446b-b915-5b74d3faa72c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:01:30 crc kubenswrapper[4693]: I1212 16:01:30.092454 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwd98\" (UniqueName: \"kubernetes.io/projected/82956395-7bc8-446b-b915-5b74d3faa72c-kube-api-access-xwd98\") on node \"crc\" DevicePath \"\"" Dec 12 16:01:30 crc kubenswrapper[4693]: I1212 16:01:30.092490 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82956395-7bc8-446b-b915-5b74d3faa72c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:01:30 crc kubenswrapper[4693]: I1212 16:01:30.912488 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vccdw_82956395-7bc8-446b-b915-5b74d3faa72c/registry-server/0.log" Dec 12 16:01:30 crc kubenswrapper[4693]: I1212 16:01:30.913553 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vccdw" event={"ID":"82956395-7bc8-446b-b915-5b74d3faa72c","Type":"ContainerDied","Data":"a7265de3f42ae17ddb4e9026a64996b56948db73a2b57ac28bd3010ebefaa362"} Dec 12 16:01:30 crc kubenswrapper[4693]: I1212 16:01:30.913601 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vccdw" Dec 12 16:01:30 crc kubenswrapper[4693]: I1212 16:01:30.913604 4693 scope.go:117] "RemoveContainer" containerID="a14b2170d2c319cb662125c602069698b45264c524377672f31bcb47ae4454b7" Dec 12 16:01:30 crc kubenswrapper[4693]: I1212 16:01:30.931440 4693 scope.go:117] "RemoveContainer" containerID="846989bc064778d0a045b73672185ec5d9d03650493158de260a0a665c181fe4" Dec 12 16:01:30 crc kubenswrapper[4693]: I1212 16:01:30.941378 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vccdw"] Dec 12 16:01:30 crc kubenswrapper[4693]: I1212 16:01:30.948690 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vccdw"] Dec 12 16:01:30 crc kubenswrapper[4693]: I1212 16:01:30.962933 4693 scope.go:117] "RemoveContainer" containerID="5db07a65d19ba9550cf4851204e8005a9906000f661c4e1aa3bc1700909b1b9e" Dec 12 16:01:31 crc kubenswrapper[4693]: I1212 16:01:31.366308 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82956395-7bc8-446b-b915-5b74d3faa72c" path="/var/lib/kubelet/pods/82956395-7bc8-446b-b915-5b74d3faa72c/volumes" Dec 12 16:01:35 crc kubenswrapper[4693]: I1212 16:01:35.869340 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8rm26"] Dec 12 16:01:35 crc kubenswrapper[4693]: E1212 16:01:35.870017 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82956395-7bc8-446b-b915-5b74d3faa72c" containerName="extract-content" Dec 12 16:01:35 crc kubenswrapper[4693]: I1212 16:01:35.870032 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="82956395-7bc8-446b-b915-5b74d3faa72c" containerName="extract-content" Dec 12 16:01:35 crc kubenswrapper[4693]: E1212 16:01:35.870051 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82956395-7bc8-446b-b915-5b74d3faa72c" containerName="registry-server" Dec 12 16:01:35 crc kubenswrapper[4693]: I1212 16:01:35.870059 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="82956395-7bc8-446b-b915-5b74d3faa72c" containerName="registry-server" Dec 12 16:01:35 crc kubenswrapper[4693]: E1212 16:01:35.870072 4693 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="82956395-7bc8-446b-b915-5b74d3faa72c" containerName="extract-utilities" Dec 12 16:01:35 crc kubenswrapper[4693]: I1212 16:01:35.870080 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="82956395-7bc8-446b-b915-5b74d3faa72c" containerName="extract-utilities" Dec 12 16:01:35 crc kubenswrapper[4693]: I1212 16:01:35.870212 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="82956395-7bc8-446b-b915-5b74d3faa72c" containerName="registry-server" Dec 12 16:01:35 crc kubenswrapper[4693]: I1212 16:01:35.871254 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:35 crc kubenswrapper[4693]: I1212 16:01:35.889799 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rm26"] Dec 12 16:01:35 crc kubenswrapper[4693]: I1212 16:01:35.973160 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hbs6\" (UniqueName: \"kubernetes.io/projected/cd7423ac-012d-467a-864a-18108b91520f-kube-api-access-9hbs6\") pod \"redhat-marketplace-8rm26\" (UID: \"cd7423ac-012d-467a-864a-18108b91520f\") " pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:35 crc kubenswrapper[4693]: I1212 16:01:35.973314 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd7423ac-012d-467a-864a-18108b91520f-utilities\") pod \"redhat-marketplace-8rm26\" (UID: \"cd7423ac-012d-467a-864a-18108b91520f\") " pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:35 crc kubenswrapper[4693]: I1212 16:01:35.973360 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd7423ac-012d-467a-864a-18108b91520f-catalog-content\") pod \"redhat-marketplace-8rm26\" (UID: \"cd7423ac-012d-467a-864a-18108b91520f\") " pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:36 crc kubenswrapper[4693]: I1212 16:01:36.074484 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd7423ac-012d-467a-864a-18108b91520f-utilities\") pod \"redhat-marketplace-8rm26\" (UID: \"cd7423ac-012d-467a-864a-18108b91520f\") " pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:36 crc kubenswrapper[4693]: I1212 16:01:36.074567 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd7423ac-012d-467a-864a-18108b91520f-catalog-content\") pod \"redhat-marketplace-8rm26\" (UID: \"cd7423ac-012d-467a-864a-18108b91520f\") " pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:36 crc kubenswrapper[4693]: I1212 16:01:36.074626 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hbs6\" (UniqueName: \"kubernetes.io/projected/cd7423ac-012d-467a-864a-18108b91520f-kube-api-access-9hbs6\") pod \"redhat-marketplace-8rm26\" (UID: \"cd7423ac-012d-467a-864a-18108b91520f\") " pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:36 crc kubenswrapper[4693]: I1212 16:01:36.075136 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd7423ac-012d-467a-864a-18108b91520f-utilities\") pod 
\"redhat-marketplace-8rm26\" (UID: \"cd7423ac-012d-467a-864a-18108b91520f\") " pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:36 crc kubenswrapper[4693]: I1212 16:01:36.075206 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd7423ac-012d-467a-864a-18108b91520f-catalog-content\") pod \"redhat-marketplace-8rm26\" (UID: \"cd7423ac-012d-467a-864a-18108b91520f\") " pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:36 crc kubenswrapper[4693]: I1212 16:01:36.098056 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hbs6\" (UniqueName: \"kubernetes.io/projected/cd7423ac-012d-467a-864a-18108b91520f-kube-api-access-9hbs6\") pod \"redhat-marketplace-8rm26\" (UID: \"cd7423ac-012d-467a-864a-18108b91520f\") " pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:36 crc kubenswrapper[4693]: I1212 16:01:36.188219 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:36 crc kubenswrapper[4693]: I1212 16:01:36.601894 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rm26"] Dec 12 16:01:36 crc kubenswrapper[4693]: I1212 16:01:36.955575 4693 generic.go:334] "Generic (PLEG): container finished" podID="cd7423ac-012d-467a-864a-18108b91520f" containerID="b9210d2b8655dacf75934f55fdc9b5dd460f166a3fabe567bf3df3c49c5d52c0" exitCode=0 Dec 12 16:01:36 crc kubenswrapper[4693]: I1212 16:01:36.955618 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rm26" event={"ID":"cd7423ac-012d-467a-864a-18108b91520f","Type":"ContainerDied","Data":"b9210d2b8655dacf75934f55fdc9b5dd460f166a3fabe567bf3df3c49c5d52c0"} Dec 12 16:01:36 crc kubenswrapper[4693]: I1212 16:01:36.955661 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rm26" event={"ID":"cd7423ac-012d-467a-864a-18108b91520f","Type":"ContainerStarted","Data":"3b91175e4741eadd2e6837fbb88e7557bf117b5ac9ef5f9dfd0b1cf1d307625d"} Dec 12 16:01:37 crc kubenswrapper[4693]: I1212 16:01:37.965660 4693 generic.go:334] "Generic (PLEG): container finished" podID="cd7423ac-012d-467a-864a-18108b91520f" containerID="b5018d9542d8dcb3f8efb83e5411b587de52ae3b4eaa5ec9225aee26d0fbbfa8" exitCode=0 Dec 12 16:01:37 crc kubenswrapper[4693]: I1212 16:01:37.965704 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rm26" event={"ID":"cd7423ac-012d-467a-864a-18108b91520f","Type":"ContainerDied","Data":"b5018d9542d8dcb3f8efb83e5411b587de52ae3b4eaa5ec9225aee26d0fbbfa8"} Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.496914 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t"] Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.498415 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.504457 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.508843 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t"] Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.614188 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s45ph\" (UniqueName: \"kubernetes.io/projected/ed544ab2-ef67-4adc-ab9f-73e92db27c87-kube-api-access-s45ph\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t\" (UID: \"ed544ab2-ef67-4adc-ab9f-73e92db27c87\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.614269 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed544ab2-ef67-4adc-ab9f-73e92db27c87-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t\" (UID: \"ed544ab2-ef67-4adc-ab9f-73e92db27c87\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.614325 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed544ab2-ef67-4adc-ab9f-73e92db27c87-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t\" (UID: \"ed544ab2-ef67-4adc-ab9f-73e92db27c87\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.689484 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r"] Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.691613 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.711597 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r"] Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.715993 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s45ph\" (UniqueName: \"kubernetes.io/projected/ed544ab2-ef67-4adc-ab9f-73e92db27c87-kube-api-access-s45ph\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t\" (UID: \"ed544ab2-ef67-4adc-ab9f-73e92db27c87\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.716186 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed544ab2-ef67-4adc-ab9f-73e92db27c87-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t\" (UID: \"ed544ab2-ef67-4adc-ab9f-73e92db27c87\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.716287 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed544ab2-ef67-4adc-ab9f-73e92db27c87-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t\" (UID: \"ed544ab2-ef67-4adc-ab9f-73e92db27c87\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.716897 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed544ab2-ef67-4adc-ab9f-73e92db27c87-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t\" (UID: \"ed544ab2-ef67-4adc-ab9f-73e92db27c87\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.717508 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed544ab2-ef67-4adc-ab9f-73e92db27c87-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t\" (UID: \"ed544ab2-ef67-4adc-ab9f-73e92db27c87\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.768070 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s45ph\" (UniqueName: \"kubernetes.io/projected/ed544ab2-ef67-4adc-ab9f-73e92db27c87-kube-api-access-s45ph\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t\" (UID: \"ed544ab2-ef67-4adc-ab9f-73e92db27c87\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.819993 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12a41223-aa3e-436b-917a-bf7ccf4d5587-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r\" (UID: \"12a41223-aa3e-436b-917a-bf7ccf4d5587\") " 
pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.820089 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr92x\" (UniqueName: \"kubernetes.io/projected/12a41223-aa3e-436b-917a-bf7ccf4d5587-kube-api-access-lr92x\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r\" (UID: \"12a41223-aa3e-436b-917a-bf7ccf4d5587\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.820117 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12a41223-aa3e-436b-917a-bf7ccf4d5587-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r\" (UID: \"12a41223-aa3e-436b-917a-bf7ccf4d5587\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.829774 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.921110 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12a41223-aa3e-436b-917a-bf7ccf4d5587-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r\" (UID: \"12a41223-aa3e-436b-917a-bf7ccf4d5587\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.921224 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12a41223-aa3e-436b-917a-bf7ccf4d5587-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r\" (UID: \"12a41223-aa3e-436b-917a-bf7ccf4d5587\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.921333 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr92x\" (UniqueName: \"kubernetes.io/projected/12a41223-aa3e-436b-917a-bf7ccf4d5587-kube-api-access-lr92x\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r\" (UID: \"12a41223-aa3e-436b-917a-bf7ccf4d5587\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.921843 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12a41223-aa3e-436b-917a-bf7ccf4d5587-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r\" (UID: \"12a41223-aa3e-436b-917a-bf7ccf4d5587\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.922122 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12a41223-aa3e-436b-917a-bf7ccf4d5587-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r\" (UID: \"12a41223-aa3e-436b-917a-bf7ccf4d5587\") " 
pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.945882 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr92x\" (UniqueName: \"kubernetes.io/projected/12a41223-aa3e-436b-917a-bf7ccf4d5587-kube-api-access-lr92x\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r\" (UID: \"12a41223-aa3e-436b-917a-bf7ccf4d5587\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" Dec 12 16:01:38 crc kubenswrapper[4693]: I1212 16:01:38.977976 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rm26" event={"ID":"cd7423ac-012d-467a-864a-18108b91520f","Type":"ContainerStarted","Data":"c577febc4b3ba4c3bad408950c893ebc6c63d13bc989d655be29b4dc450902b2"} Dec 12 16:01:39 crc kubenswrapper[4693]: I1212 16:01:39.003886 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8rm26" podStartSLOduration=2.478448378 podStartE2EDuration="4.003868787s" podCreationTimestamp="2025-12-12 16:01:35 +0000 UTC" firstStartedPulling="2025-12-12 16:01:36.957716528 +0000 UTC m=+924.126356129" lastFinishedPulling="2025-12-12 16:01:38.483136937 +0000 UTC m=+925.651776538" observedRunningTime="2025-12-12 16:01:38.999879079 +0000 UTC m=+926.168518690" watchObservedRunningTime="2025-12-12 16:01:39.003868787 +0000 UTC m=+926.172508388" Dec 12 16:01:39 crc kubenswrapper[4693]: I1212 16:01:39.006238 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" Dec 12 16:01:39 crc kubenswrapper[4693]: I1212 16:01:39.119684 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t"] Dec 12 16:01:39 crc kubenswrapper[4693]: W1212 16:01:39.123633 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded544ab2_ef67_4adc_ab9f_73e92db27c87.slice/crio-806182aab85c40960781eb093778a5d409efe63b119d02083f4e7c989966d47e WatchSource:0}: Error finding container 806182aab85c40960781eb093778a5d409efe63b119d02083f4e7c989966d47e: Status 404 returned error can't find the container with id 806182aab85c40960781eb093778a5d409efe63b119d02083f4e7c989966d47e Dec 12 16:01:39 crc kubenswrapper[4693]: I1212 16:01:39.450348 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r"] Dec 12 16:01:39 crc kubenswrapper[4693]: W1212 16:01:39.455442 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12a41223_aa3e_436b_917a_bf7ccf4d5587.slice/crio-ecfa40c0def92136cdb7f297c4dc7ec45dc33818a4543bbfba5c44a40dbf9a89 WatchSource:0}: Error finding container ecfa40c0def92136cdb7f297c4dc7ec45dc33818a4543bbfba5c44a40dbf9a89: Status 404 returned error can't find the container with id ecfa40c0def92136cdb7f297c4dc7ec45dc33818a4543bbfba5c44a40dbf9a89 Dec 12 16:01:39 crc kubenswrapper[4693]: I1212 16:01:39.984178 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" 
event={"ID":"12a41223-aa3e-436b-917a-bf7ccf4d5587","Type":"ContainerStarted","Data":"ecfa40c0def92136cdb7f297c4dc7ec45dc33818a4543bbfba5c44a40dbf9a89"} Dec 12 16:01:39 crc kubenswrapper[4693]: I1212 16:01:39.985067 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" event={"ID":"ed544ab2-ef67-4adc-ab9f-73e92db27c87","Type":"ContainerStarted","Data":"806182aab85c40960781eb093778a5d409efe63b119d02083f4e7c989966d47e"} Dec 12 16:01:40 crc kubenswrapper[4693]: I1212 16:01:40.992597 4693 generic.go:334] "Generic (PLEG): container finished" podID="ed544ab2-ef67-4adc-ab9f-73e92db27c87" containerID="00b8a867c8efcf3ffd1d8855b1b61bf50ba96631e221c19da9d7cc240b5acf33" exitCode=0 Dec 12 16:01:40 crc kubenswrapper[4693]: I1212 16:01:40.992638 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" event={"ID":"ed544ab2-ef67-4adc-ab9f-73e92db27c87","Type":"ContainerDied","Data":"00b8a867c8efcf3ffd1d8855b1b61bf50ba96631e221c19da9d7cc240b5acf33"} Dec 12 16:01:40 crc kubenswrapper[4693]: I1212 16:01:40.994476 4693 generic.go:334] "Generic (PLEG): container finished" podID="12a41223-aa3e-436b-917a-bf7ccf4d5587" containerID="14a9113bebc0c8dcf7d3e5df87b0292d267590a90155547a78db88bfee854ba7" exitCode=0 Dec 12 16:01:40 crc kubenswrapper[4693]: I1212 16:01:40.994511 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" event={"ID":"12a41223-aa3e-436b-917a-bf7ccf4d5587","Type":"ContainerDied","Data":"14a9113bebc0c8dcf7d3e5df87b0292d267590a90155547a78db88bfee854ba7"} Dec 12 16:01:43 crc kubenswrapper[4693]: I1212 16:01:43.009029 4693 generic.go:334] "Generic (PLEG): container finished" podID="12a41223-aa3e-436b-917a-bf7ccf4d5587" containerID="c72c5f3b7f2739aa26135027c4c71d1e43bef2c6747f502d13e066863a946f2f" exitCode=0 Dec 12 16:01:43 crc kubenswrapper[4693]: I1212 16:01:43.009122 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" event={"ID":"12a41223-aa3e-436b-917a-bf7ccf4d5587","Type":"ContainerDied","Data":"c72c5f3b7f2739aa26135027c4c71d1e43bef2c6747f502d13e066863a946f2f"} Dec 12 16:01:43 crc kubenswrapper[4693]: E1212 16:01:43.322759 4693 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12a41223_aa3e_436b_917a_bf7ccf4d5587.slice/crio-4c82836816530bd5cace8a889aa845a60de64d6c3c74e030797b4c8b2d691a59.scope\": RecentStats: unable to find data in memory cache]" Dec 12 16:01:44 crc kubenswrapper[4693]: I1212 16:01:44.016810 4693 generic.go:334] "Generic (PLEG): container finished" podID="12a41223-aa3e-436b-917a-bf7ccf4d5587" containerID="4c82836816530bd5cace8a889aa845a60de64d6c3c74e030797b4c8b2d691a59" exitCode=0 Dec 12 16:01:44 crc kubenswrapper[4693]: I1212 16:01:44.017134 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" event={"ID":"12a41223-aa3e-436b-917a-bf7ccf4d5587","Type":"ContainerDied","Data":"4c82836816530bd5cace8a889aa845a60de64d6c3c74e030797b4c8b2d691a59"} Dec 12 16:01:45 crc kubenswrapper[4693]: I1212 16:01:45.024782 4693 generic.go:334] "Generic (PLEG): container finished" podID="ed544ab2-ef67-4adc-ab9f-73e92db27c87" 
containerID="85ed748e9d6267b226f2ab977b57756d0c59b363a9116d9a95455654b0d9d789" exitCode=0 Dec 12 16:01:45 crc kubenswrapper[4693]: I1212 16:01:45.025031 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" event={"ID":"ed544ab2-ef67-4adc-ab9f-73e92db27c87","Type":"ContainerDied","Data":"85ed748e9d6267b226f2ab977b57756d0c59b363a9116d9a95455654b0d9d789"} Dec 12 16:01:45 crc kubenswrapper[4693]: I1212 16:01:45.283430 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" Dec 12 16:01:45 crc kubenswrapper[4693]: I1212 16:01:45.313009 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12a41223-aa3e-436b-917a-bf7ccf4d5587-bundle\") pod \"12a41223-aa3e-436b-917a-bf7ccf4d5587\" (UID: \"12a41223-aa3e-436b-917a-bf7ccf4d5587\") " Dec 12 16:01:45 crc kubenswrapper[4693]: I1212 16:01:45.313122 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr92x\" (UniqueName: \"kubernetes.io/projected/12a41223-aa3e-436b-917a-bf7ccf4d5587-kube-api-access-lr92x\") pod \"12a41223-aa3e-436b-917a-bf7ccf4d5587\" (UID: \"12a41223-aa3e-436b-917a-bf7ccf4d5587\") " Dec 12 16:01:45 crc kubenswrapper[4693]: I1212 16:01:45.313148 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12a41223-aa3e-436b-917a-bf7ccf4d5587-util\") pod \"12a41223-aa3e-436b-917a-bf7ccf4d5587\" (UID: \"12a41223-aa3e-436b-917a-bf7ccf4d5587\") " Dec 12 16:01:45 crc kubenswrapper[4693]: I1212 16:01:45.314734 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12a41223-aa3e-436b-917a-bf7ccf4d5587-bundle" (OuterVolumeSpecName: "bundle") pod "12a41223-aa3e-436b-917a-bf7ccf4d5587" (UID: "12a41223-aa3e-436b-917a-bf7ccf4d5587"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:01:45 crc kubenswrapper[4693]: I1212 16:01:45.319037 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a41223-aa3e-436b-917a-bf7ccf4d5587-kube-api-access-lr92x" (OuterVolumeSpecName: "kube-api-access-lr92x") pod "12a41223-aa3e-436b-917a-bf7ccf4d5587" (UID: "12a41223-aa3e-436b-917a-bf7ccf4d5587"). InnerVolumeSpecName "kube-api-access-lr92x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:01:45 crc kubenswrapper[4693]: I1212 16:01:45.329929 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12a41223-aa3e-436b-917a-bf7ccf4d5587-util" (OuterVolumeSpecName: "util") pod "12a41223-aa3e-436b-917a-bf7ccf4d5587" (UID: "12a41223-aa3e-436b-917a-bf7ccf4d5587"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:01:45 crc kubenswrapper[4693]: I1212 16:01:45.419131 4693 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12a41223-aa3e-436b-917a-bf7ccf4d5587-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:01:45 crc kubenswrapper[4693]: I1212 16:01:45.419779 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr92x\" (UniqueName: \"kubernetes.io/projected/12a41223-aa3e-436b-917a-bf7ccf4d5587-kube-api-access-lr92x\") on node \"crc\" DevicePath \"\"" Dec 12 16:01:45 crc kubenswrapper[4693]: I1212 16:01:45.419796 4693 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12a41223-aa3e-436b-917a-bf7ccf4d5587-util\") on node \"crc\" DevicePath \"\"" Dec 12 16:01:46 crc kubenswrapper[4693]: I1212 16:01:46.034236 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" Dec 12 16:01:46 crc kubenswrapper[4693]: I1212 16:01:46.034224 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dkj4r" event={"ID":"12a41223-aa3e-436b-917a-bf7ccf4d5587","Type":"ContainerDied","Data":"ecfa40c0def92136cdb7f297c4dc7ec45dc33818a4543bbfba5c44a40dbf9a89"} Dec 12 16:01:46 crc kubenswrapper[4693]: I1212 16:01:46.034388 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecfa40c0def92136cdb7f297c4dc7ec45dc33818a4543bbfba5c44a40dbf9a89" Dec 12 16:01:46 crc kubenswrapper[4693]: I1212 16:01:46.036704 4693 generic.go:334] "Generic (PLEG): container finished" podID="ed544ab2-ef67-4adc-ab9f-73e92db27c87" containerID="799ac553d01abff1b8d2d7cdf204b44c5f6835e985b1b16130dd00ea9190a3e7" exitCode=0 Dec 12 16:01:46 crc kubenswrapper[4693]: I1212 16:01:46.036741 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" event={"ID":"ed544ab2-ef67-4adc-ab9f-73e92db27c87","Type":"ContainerDied","Data":"799ac553d01abff1b8d2d7cdf204b44c5f6835e985b1b16130dd00ea9190a3e7"} Dec 12 16:01:46 crc kubenswrapper[4693]: I1212 16:01:46.189102 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:46 crc kubenswrapper[4693]: I1212 16:01:46.189169 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:46 crc kubenswrapper[4693]: I1212 16:01:46.229927 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.088798 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.453773 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dddlw"] Dec 12 16:01:47 crc kubenswrapper[4693]: E1212 16:01:47.454143 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a41223-aa3e-436b-917a-bf7ccf4d5587" containerName="pull" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.454155 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a41223-aa3e-436b-917a-bf7ccf4d5587" 
containerName="pull" Dec 12 16:01:47 crc kubenswrapper[4693]: E1212 16:01:47.454167 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a41223-aa3e-436b-917a-bf7ccf4d5587" containerName="util" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.454173 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a41223-aa3e-436b-917a-bf7ccf4d5587" containerName="util" Dec 12 16:01:47 crc kubenswrapper[4693]: E1212 16:01:47.454200 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a41223-aa3e-436b-917a-bf7ccf4d5587" containerName="extract" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.454205 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a41223-aa3e-436b-917a-bf7ccf4d5587" containerName="extract" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.454338 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="12a41223-aa3e-436b-917a-bf7ccf4d5587" containerName="extract" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.455400 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.476181 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dddlw"] Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.550180 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-utilities\") pod \"community-operators-dddlw\" (UID: \"75b7f55e-826f-4a8d-bb2d-d253e5d0d624\") " pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.550261 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-catalog-content\") pod \"community-operators-dddlw\" (UID: \"75b7f55e-826f-4a8d-bb2d-d253e5d0d624\") " pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.550333 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvkt7\" (UniqueName: \"kubernetes.io/projected/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-kube-api-access-pvkt7\") pod \"community-operators-dddlw\" (UID: \"75b7f55e-826f-4a8d-bb2d-d253e5d0d624\") " pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.651202 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-catalog-content\") pod \"community-operators-dddlw\" (UID: \"75b7f55e-826f-4a8d-bb2d-d253e5d0d624\") " pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.651306 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvkt7\" (UniqueName: \"kubernetes.io/projected/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-kube-api-access-pvkt7\") pod \"community-operators-dddlw\" (UID: \"75b7f55e-826f-4a8d-bb2d-d253e5d0d624\") " pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.651407 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-utilities\") pod \"community-operators-dddlw\" (UID: \"75b7f55e-826f-4a8d-bb2d-d253e5d0d624\") " pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.826346 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-utilities\") pod \"community-operators-dddlw\" (UID: \"75b7f55e-826f-4a8d-bb2d-d253e5d0d624\") " pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.832773 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-catalog-content\") pod \"community-operators-dddlw\" (UID: \"75b7f55e-826f-4a8d-bb2d-d253e5d0d624\") " pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.842599 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvkt7\" (UniqueName: \"kubernetes.io/projected/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-kube-api-access-pvkt7\") pod \"community-operators-dddlw\" (UID: \"75b7f55e-826f-4a8d-bb2d-d253e5d0d624\") " pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.902372 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.955244 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s45ph\" (UniqueName: \"kubernetes.io/projected/ed544ab2-ef67-4adc-ab9f-73e92db27c87-kube-api-access-s45ph\") pod \"ed544ab2-ef67-4adc-ab9f-73e92db27c87\" (UID: \"ed544ab2-ef67-4adc-ab9f-73e92db27c87\") " Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.955318 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed544ab2-ef67-4adc-ab9f-73e92db27c87-bundle\") pod \"ed544ab2-ef67-4adc-ab9f-73e92db27c87\" (UID: \"ed544ab2-ef67-4adc-ab9f-73e92db27c87\") " Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.955391 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed544ab2-ef67-4adc-ab9f-73e92db27c87-util\") pod \"ed544ab2-ef67-4adc-ab9f-73e92db27c87\" (UID: \"ed544ab2-ef67-4adc-ab9f-73e92db27c87\") " Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.957706 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed544ab2-ef67-4adc-ab9f-73e92db27c87-bundle" (OuterVolumeSpecName: "bundle") pod "ed544ab2-ef67-4adc-ab9f-73e92db27c87" (UID: "ed544ab2-ef67-4adc-ab9f-73e92db27c87"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.959304 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed544ab2-ef67-4adc-ab9f-73e92db27c87-kube-api-access-s45ph" (OuterVolumeSpecName: "kube-api-access-s45ph") pod "ed544ab2-ef67-4adc-ab9f-73e92db27c87" (UID: "ed544ab2-ef67-4adc-ab9f-73e92db27c87"). InnerVolumeSpecName "kube-api-access-s45ph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:01:47 crc kubenswrapper[4693]: I1212 16:01:47.971332 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed544ab2-ef67-4adc-ab9f-73e92db27c87-util" (OuterVolumeSpecName: "util") pod "ed544ab2-ef67-4adc-ab9f-73e92db27c87" (UID: "ed544ab2-ef67-4adc-ab9f-73e92db27c87"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:01:48 crc kubenswrapper[4693]: I1212 16:01:48.050688 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" event={"ID":"ed544ab2-ef67-4adc-ab9f-73e92db27c87","Type":"ContainerDied","Data":"806182aab85c40960781eb093778a5d409efe63b119d02083f4e7c989966d47e"} Dec 12 16:01:48 crc kubenswrapper[4693]: I1212 16:01:48.050747 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="806182aab85c40960781eb093778a5d409efe63b119d02083f4e7c989966d47e" Dec 12 16:01:48 crc kubenswrapper[4693]: I1212 16:01:48.050707 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fnw86t" Dec 12 16:01:48 crc kubenswrapper[4693]: I1212 16:01:48.057094 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s45ph\" (UniqueName: \"kubernetes.io/projected/ed544ab2-ef67-4adc-ab9f-73e92db27c87-kube-api-access-s45ph\") on node \"crc\" DevicePath \"\"" Dec 12 16:01:48 crc kubenswrapper[4693]: I1212 16:01:48.057125 4693 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed544ab2-ef67-4adc-ab9f-73e92db27c87-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:01:48 crc kubenswrapper[4693]: I1212 16:01:48.057138 4693 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed544ab2-ef67-4adc-ab9f-73e92db27c87-util\") on node \"crc\" DevicePath \"\"" Dec 12 16:01:48 crc kubenswrapper[4693]: I1212 16:01:48.129530 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:01:48 crc kubenswrapper[4693]: I1212 16:01:48.311911 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dddlw"] Dec 12 16:01:48 crc kubenswrapper[4693]: W1212 16:01:48.319870 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75b7f55e_826f_4a8d_bb2d_d253e5d0d624.slice/crio-6dcab103f5abd84e43dd06e48e220865c3f2b5d0a34bbc6e9ac0aa0d6769fcc8 WatchSource:0}: Error finding container 6dcab103f5abd84e43dd06e48e220865c3f2b5d0a34bbc6e9ac0aa0d6769fcc8: Status 404 returned error can't find the container with id 6dcab103f5abd84e43dd06e48e220865c3f2b5d0a34bbc6e9ac0aa0d6769fcc8 Dec 12 16:01:49 crc kubenswrapper[4693]: I1212 16:01:49.058105 4693 generic.go:334] "Generic (PLEG): container finished" podID="75b7f55e-826f-4a8d-bb2d-d253e5d0d624" containerID="ca7222d66937027eadbb46663c1a419817f148dd66c7cb0b3893470ec6ff7225" exitCode=0 Dec 12 16:01:49 crc kubenswrapper[4693]: I1212 16:01:49.058154 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dddlw" event={"ID":"75b7f55e-826f-4a8d-bb2d-d253e5d0d624","Type":"ContainerDied","Data":"ca7222d66937027eadbb46663c1a419817f148dd66c7cb0b3893470ec6ff7225"} Dec 12 16:01:49 crc kubenswrapper[4693]: I1212 16:01:49.058184 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dddlw" event={"ID":"75b7f55e-826f-4a8d-bb2d-d253e5d0d624","Type":"ContainerStarted","Data":"6dcab103f5abd84e43dd06e48e220865c3f2b5d0a34bbc6e9ac0aa0d6769fcc8"} Dec 12 16:01:51 crc kubenswrapper[4693]: I1212 16:01:51.071236 4693 generic.go:334] "Generic (PLEG): container finished" podID="75b7f55e-826f-4a8d-bb2d-d253e5d0d624" containerID="7cb147a197135a354973a02a5a654bc4ce40e224c8543cf2edb60ce535b2c8d4" exitCode=0 Dec 12 16:01:51 crc kubenswrapper[4693]: I1212 16:01:51.071310 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dddlw" event={"ID":"75b7f55e-826f-4a8d-bb2d-d253e5d0d624","Type":"ContainerDied","Data":"7cb147a197135a354973a02a5a654bc4ce40e224c8543cf2edb60ce535b2c8d4"} Dec 12 16:01:51 crc kubenswrapper[4693]: I1212 16:01:51.442234 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rm26"] Dec 12 16:01:51 crc kubenswrapper[4693]: I1212 16:01:51.442794 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8rm26" podUID="cd7423ac-012d-467a-864a-18108b91520f" containerName="registry-server" containerID="cri-o://c577febc4b3ba4c3bad408950c893ebc6c63d13bc989d655be29b4dc450902b2" gracePeriod=2 Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.079331 4693 generic.go:334] "Generic (PLEG): container finished" podID="cd7423ac-012d-467a-864a-18108b91520f" containerID="c577febc4b3ba4c3bad408950c893ebc6c63d13bc989d655be29b4dc450902b2" exitCode=0 Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.079476 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rm26" event={"ID":"cd7423ac-012d-467a-864a-18108b91520f","Type":"ContainerDied","Data":"c577febc4b3ba4c3bad408950c893ebc6c63d13bc989d655be29b4dc450902b2"} Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.339811 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.413782 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hbs6\" (UniqueName: \"kubernetes.io/projected/cd7423ac-012d-467a-864a-18108b91520f-kube-api-access-9hbs6\") pod \"cd7423ac-012d-467a-864a-18108b91520f\" (UID: \"cd7423ac-012d-467a-864a-18108b91520f\") " Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.413856 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd7423ac-012d-467a-864a-18108b91520f-utilities\") pod \"cd7423ac-012d-467a-864a-18108b91520f\" (UID: \"cd7423ac-012d-467a-864a-18108b91520f\") " Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.413917 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd7423ac-012d-467a-864a-18108b91520f-catalog-content\") pod \"cd7423ac-012d-467a-864a-18108b91520f\" (UID: \"cd7423ac-012d-467a-864a-18108b91520f\") " Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.415972 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd7423ac-012d-467a-864a-18108b91520f-utilities" (OuterVolumeSpecName: "utilities") pod "cd7423ac-012d-467a-864a-18108b91520f" (UID: "cd7423ac-012d-467a-864a-18108b91520f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.423845 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd7423ac-012d-467a-864a-18108b91520f-kube-api-access-9hbs6" (OuterVolumeSpecName: "kube-api-access-9hbs6") pod "cd7423ac-012d-467a-864a-18108b91520f" (UID: "cd7423ac-012d-467a-864a-18108b91520f"). InnerVolumeSpecName "kube-api-access-9hbs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.437742 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd7423ac-012d-467a-864a-18108b91520f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd7423ac-012d-467a-864a-18108b91520f" (UID: "cd7423ac-012d-467a-864a-18108b91520f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.515742 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hbs6\" (UniqueName: \"kubernetes.io/projected/cd7423ac-012d-467a-864a-18108b91520f-kube-api-access-9hbs6\") on node \"crc\" DevicePath \"\"" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.515786 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd7423ac-012d-467a-864a-18108b91520f-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.515798 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd7423ac-012d-467a-864a-18108b91520f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.691661 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-bhbcr"] Dec 12 16:01:52 crc kubenswrapper[4693]: E1212 16:01:52.691971 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd7423ac-012d-467a-864a-18108b91520f" containerName="registry-server" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.691996 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd7423ac-012d-467a-864a-18108b91520f" containerName="registry-server" Dec 12 16:01:52 crc kubenswrapper[4693]: E1212 16:01:52.692011 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd7423ac-012d-467a-864a-18108b91520f" containerName="extract-content" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.692020 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd7423ac-012d-467a-864a-18108b91520f" containerName="extract-content" Dec 12 16:01:52 crc kubenswrapper[4693]: E1212 16:01:52.692035 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed544ab2-ef67-4adc-ab9f-73e92db27c87" containerName="pull" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.692044 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed544ab2-ef67-4adc-ab9f-73e92db27c87" containerName="pull" Dec 12 16:01:52 crc kubenswrapper[4693]: E1212 16:01:52.692071 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd7423ac-012d-467a-864a-18108b91520f" containerName="extract-utilities" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.692079 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd7423ac-012d-467a-864a-18108b91520f" containerName="extract-utilities" Dec 12 16:01:52 crc kubenswrapper[4693]: E1212 16:01:52.692091 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed544ab2-ef67-4adc-ab9f-73e92db27c87" containerName="util" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.692098 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed544ab2-ef67-4adc-ab9f-73e92db27c87" containerName="util" Dec 12 16:01:52 crc kubenswrapper[4693]: E1212 16:01:52.692109 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed544ab2-ef67-4adc-ab9f-73e92db27c87" containerName="extract" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.692116 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed544ab2-ef67-4adc-ab9f-73e92db27c87" containerName="extract" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.692240 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed544ab2-ef67-4adc-ab9f-73e92db27c87" 
containerName="extract" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.692257 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd7423ac-012d-467a-864a-18108b91520f" containerName="registry-server" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.692746 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-bhbcr" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.694965 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-dppcr" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.695300 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.695843 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.713007 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-bhbcr"] Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.818879 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sfhq\" (UniqueName: \"kubernetes.io/projected/c26c573c-b897-45a2-9dda-d9e612e46d1d-kube-api-access-2sfhq\") pod \"cluster-logging-operator-ff9846bd-bhbcr\" (UID: \"c26c573c-b897-45a2-9dda-d9e612e46d1d\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-bhbcr" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.919681 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sfhq\" (UniqueName: \"kubernetes.io/projected/c26c573c-b897-45a2-9dda-d9e612e46d1d-kube-api-access-2sfhq\") pod \"cluster-logging-operator-ff9846bd-bhbcr\" (UID: \"c26c573c-b897-45a2-9dda-d9e612e46d1d\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-bhbcr" Dec 12 16:01:52 crc kubenswrapper[4693]: I1212 16:01:52.936448 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sfhq\" (UniqueName: \"kubernetes.io/projected/c26c573c-b897-45a2-9dda-d9e612e46d1d-kube-api-access-2sfhq\") pod \"cluster-logging-operator-ff9846bd-bhbcr\" (UID: \"c26c573c-b897-45a2-9dda-d9e612e46d1d\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-bhbcr" Dec 12 16:01:53 crc kubenswrapper[4693]: I1212 16:01:53.007593 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-bhbcr" Dec 12 16:01:53 crc kubenswrapper[4693]: I1212 16:01:53.087816 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8rm26" event={"ID":"cd7423ac-012d-467a-864a-18108b91520f","Type":"ContainerDied","Data":"3b91175e4741eadd2e6837fbb88e7557bf117b5ac9ef5f9dfd0b1cf1d307625d"} Dec 12 16:01:53 crc kubenswrapper[4693]: I1212 16:01:53.087835 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8rm26" Dec 12 16:01:53 crc kubenswrapper[4693]: I1212 16:01:53.087899 4693 scope.go:117] "RemoveContainer" containerID="c577febc4b3ba4c3bad408950c893ebc6c63d13bc989d655be29b4dc450902b2" Dec 12 16:01:53 crc kubenswrapper[4693]: I1212 16:01:53.091949 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dddlw" event={"ID":"75b7f55e-826f-4a8d-bb2d-d253e5d0d624","Type":"ContainerStarted","Data":"494298c92a77df71d9a44e0599b427ea179ba2cfc4a307d41b74a19e7cd24be7"} Dec 12 16:01:53 crc kubenswrapper[4693]: I1212 16:01:53.111711 4693 scope.go:117] "RemoveContainer" containerID="b5018d9542d8dcb3f8efb83e5411b587de52ae3b4eaa5ec9225aee26d0fbbfa8" Dec 12 16:01:53 crc kubenswrapper[4693]: I1212 16:01:53.113069 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dddlw" podStartSLOduration=3.192512975 podStartE2EDuration="6.113051961s" podCreationTimestamp="2025-12-12 16:01:47 +0000 UTC" firstStartedPulling="2025-12-12 16:01:49.059393506 +0000 UTC m=+936.228033107" lastFinishedPulling="2025-12-12 16:01:51.979932492 +0000 UTC m=+939.148572093" observedRunningTime="2025-12-12 16:01:53.112067915 +0000 UTC m=+940.280707516" watchObservedRunningTime="2025-12-12 16:01:53.113051961 +0000 UTC m=+940.281691572" Dec 12 16:01:53 crc kubenswrapper[4693]: I1212 16:01:53.178181 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rm26"] Dec 12 16:01:53 crc kubenswrapper[4693]: I1212 16:01:53.184558 4693 scope.go:117] "RemoveContainer" containerID="b9210d2b8655dacf75934f55fdc9b5dd460f166a3fabe567bf3df3c49c5d52c0" Dec 12 16:01:53 crc kubenswrapper[4693]: I1212 16:01:53.191866 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8rm26"] Dec 12 16:01:53 crc kubenswrapper[4693]: I1212 16:01:53.366851 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd7423ac-012d-467a-864a-18108b91520f" path="/var/lib/kubelet/pods/cd7423ac-012d-467a-864a-18108b91520f/volumes" Dec 12 16:01:53 crc kubenswrapper[4693]: I1212 16:01:53.507119 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-bhbcr"] Dec 12 16:01:53 crc kubenswrapper[4693]: W1212 16:01:53.510730 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc26c573c_b897_45a2_9dda_d9e612e46d1d.slice/crio-38c047b432bb2d02e966ffac1caba30ac166485ba92bd5ab7e8b77dac23acc62 WatchSource:0}: Error finding container 38c047b432bb2d02e966ffac1caba30ac166485ba92bd5ab7e8b77dac23acc62: Status 404 returned error can't find the container with id 38c047b432bb2d02e966ffac1caba30ac166485ba92bd5ab7e8b77dac23acc62 Dec 12 16:01:54 crc kubenswrapper[4693]: I1212 16:01:54.099694 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-bhbcr" event={"ID":"c26c573c-b897-45a2-9dda-d9e612e46d1d","Type":"ContainerStarted","Data":"38c047b432bb2d02e966ffac1caba30ac166485ba92bd5ab7e8b77dac23acc62"} Dec 12 16:01:58 crc kubenswrapper[4693]: I1212 16:01:58.129767 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:01:58 crc kubenswrapper[4693]: I1212 16:01:58.130039 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:01:58 crc kubenswrapper[4693]: I1212 16:01:58.176255 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:01:59 crc kubenswrapper[4693]: I1212 16:01:59.180498 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.521880 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r"] Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.523548 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.526013 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.526030 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.526254 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.526686 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.526812 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.527925 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-8g977" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.542972 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r"] Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.561220 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vndqd\" (UniqueName: \"kubernetes.io/projected/ac702b24-9bff-4198-a7f7-e368773fb8de-kube-api-access-vndqd\") pod \"loki-operator-controller-manager-5c67884d5c-jpl4r\" (UID: \"ac702b24-9bff-4198-a7f7-e368773fb8de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.561260 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ac702b24-9bff-4198-a7f7-e368773fb8de-webhook-cert\") pod \"loki-operator-controller-manager-5c67884d5c-jpl4r\" (UID: \"ac702b24-9bff-4198-a7f7-e368773fb8de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.561310 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ac702b24-9bff-4198-a7f7-e368773fb8de-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c67884d5c-jpl4r\" (UID: 
\"ac702b24-9bff-4198-a7f7-e368773fb8de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.561337 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ac702b24-9bff-4198-a7f7-e368773fb8de-manager-config\") pod \"loki-operator-controller-manager-5c67884d5c-jpl4r\" (UID: \"ac702b24-9bff-4198-a7f7-e368773fb8de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.561354 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ac702b24-9bff-4198-a7f7-e368773fb8de-apiservice-cert\") pod \"loki-operator-controller-manager-5c67884d5c-jpl4r\" (UID: \"ac702b24-9bff-4198-a7f7-e368773fb8de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.662435 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vndqd\" (UniqueName: \"kubernetes.io/projected/ac702b24-9bff-4198-a7f7-e368773fb8de-kube-api-access-vndqd\") pod \"loki-operator-controller-manager-5c67884d5c-jpl4r\" (UID: \"ac702b24-9bff-4198-a7f7-e368773fb8de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.662474 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ac702b24-9bff-4198-a7f7-e368773fb8de-webhook-cert\") pod \"loki-operator-controller-manager-5c67884d5c-jpl4r\" (UID: \"ac702b24-9bff-4198-a7f7-e368773fb8de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.662520 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ac702b24-9bff-4198-a7f7-e368773fb8de-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c67884d5c-jpl4r\" (UID: \"ac702b24-9bff-4198-a7f7-e368773fb8de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.662552 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ac702b24-9bff-4198-a7f7-e368773fb8de-manager-config\") pod \"loki-operator-controller-manager-5c67884d5c-jpl4r\" (UID: \"ac702b24-9bff-4198-a7f7-e368773fb8de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.662570 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ac702b24-9bff-4198-a7f7-e368773fb8de-apiservice-cert\") pod \"loki-operator-controller-manager-5c67884d5c-jpl4r\" (UID: \"ac702b24-9bff-4198-a7f7-e368773fb8de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.663694 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: 
\"kubernetes.io/configmap/ac702b24-9bff-4198-a7f7-e368773fb8de-manager-config\") pod \"loki-operator-controller-manager-5c67884d5c-jpl4r\" (UID: \"ac702b24-9bff-4198-a7f7-e368773fb8de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.670634 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ac702b24-9bff-4198-a7f7-e368773fb8de-webhook-cert\") pod \"loki-operator-controller-manager-5c67884d5c-jpl4r\" (UID: \"ac702b24-9bff-4198-a7f7-e368773fb8de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.671107 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ac702b24-9bff-4198-a7f7-e368773fb8de-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c67884d5c-jpl4r\" (UID: \"ac702b24-9bff-4198-a7f7-e368773fb8de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.672841 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ac702b24-9bff-4198-a7f7-e368773fb8de-apiservice-cert\") pod \"loki-operator-controller-manager-5c67884d5c-jpl4r\" (UID: \"ac702b24-9bff-4198-a7f7-e368773fb8de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.682512 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vndqd\" (UniqueName: \"kubernetes.io/projected/ac702b24-9bff-4198-a7f7-e368773fb8de-kube-api-access-vndqd\") pod \"loki-operator-controller-manager-5c67884d5c-jpl4r\" (UID: \"ac702b24-9bff-4198-a7f7-e368773fb8de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.843495 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dddlw"] Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.843759 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dddlw" podUID="75b7f55e-826f-4a8d-bb2d-d253e5d0d624" containerName="registry-server" containerID="cri-o://494298c92a77df71d9a44e0599b427ea179ba2cfc4a307d41b74a19e7cd24be7" gracePeriod=2 Dec 12 16:02:01 crc kubenswrapper[4693]: I1212 16:02:01.846148 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:03 crc kubenswrapper[4693]: I1212 16:02:03.189092 4693 generic.go:334] "Generic (PLEG): container finished" podID="75b7f55e-826f-4a8d-bb2d-d253e5d0d624" containerID="494298c92a77df71d9a44e0599b427ea179ba2cfc4a307d41b74a19e7cd24be7" exitCode=0 Dec 12 16:02:03 crc kubenswrapper[4693]: I1212 16:02:03.189167 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dddlw" event={"ID":"75b7f55e-826f-4a8d-bb2d-d253e5d0d624","Type":"ContainerDied","Data":"494298c92a77df71d9a44e0599b427ea179ba2cfc4a307d41b74a19e7cd24be7"} Dec 12 16:02:03 crc kubenswrapper[4693]: I1212 16:02:03.189675 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dddlw" event={"ID":"75b7f55e-826f-4a8d-bb2d-d253e5d0d624","Type":"ContainerDied","Data":"6dcab103f5abd84e43dd06e48e220865c3f2b5d0a34bbc6e9ac0aa0d6769fcc8"} Dec 12 16:02:03 crc kubenswrapper[4693]: I1212 16:02:03.189694 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dcab103f5abd84e43dd06e48e220865c3f2b5d0a34bbc6e9ac0aa0d6769fcc8" Dec 12 16:02:03 crc kubenswrapper[4693]: I1212 16:02:03.238144 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:02:03 crc kubenswrapper[4693]: I1212 16:02:03.296423 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-catalog-content\") pod \"75b7f55e-826f-4a8d-bb2d-d253e5d0d624\" (UID: \"75b7f55e-826f-4a8d-bb2d-d253e5d0d624\") " Dec 12 16:02:03 crc kubenswrapper[4693]: I1212 16:02:03.296503 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvkt7\" (UniqueName: \"kubernetes.io/projected/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-kube-api-access-pvkt7\") pod \"75b7f55e-826f-4a8d-bb2d-d253e5d0d624\" (UID: \"75b7f55e-826f-4a8d-bb2d-d253e5d0d624\") " Dec 12 16:02:03 crc kubenswrapper[4693]: I1212 16:02:03.296531 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-utilities\") pod \"75b7f55e-826f-4a8d-bb2d-d253e5d0d624\" (UID: \"75b7f55e-826f-4a8d-bb2d-d253e5d0d624\") " Dec 12 16:02:03 crc kubenswrapper[4693]: I1212 16:02:03.297720 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-utilities" (OuterVolumeSpecName: "utilities") pod "75b7f55e-826f-4a8d-bb2d-d253e5d0d624" (UID: "75b7f55e-826f-4a8d-bb2d-d253e5d0d624"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:02:03 crc kubenswrapper[4693]: I1212 16:02:03.304311 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-kube-api-access-pvkt7" (OuterVolumeSpecName: "kube-api-access-pvkt7") pod "75b7f55e-826f-4a8d-bb2d-d253e5d0d624" (UID: "75b7f55e-826f-4a8d-bb2d-d253e5d0d624"). InnerVolumeSpecName "kube-api-access-pvkt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:02:03 crc kubenswrapper[4693]: I1212 16:02:03.375526 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75b7f55e-826f-4a8d-bb2d-d253e5d0d624" (UID: "75b7f55e-826f-4a8d-bb2d-d253e5d0d624"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:02:03 crc kubenswrapper[4693]: I1212 16:02:03.398798 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:02:03 crc kubenswrapper[4693]: I1212 16:02:03.398836 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvkt7\" (UniqueName: \"kubernetes.io/projected/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-kube-api-access-pvkt7\") on node \"crc\" DevicePath \"\"" Dec 12 16:02:03 crc kubenswrapper[4693]: I1212 16:02:03.398850 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75b7f55e-826f-4a8d-bb2d-d253e5d0d624-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:02:03 crc kubenswrapper[4693]: I1212 16:02:03.402317 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r"] Dec 12 16:02:04 crc kubenswrapper[4693]: I1212 16:02:04.197444 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" event={"ID":"ac702b24-9bff-4198-a7f7-e368773fb8de","Type":"ContainerStarted","Data":"d5ad72918e61e97cf987d847127d50944d618cc0742676a310ffad8a09444b45"} Dec 12 16:02:04 crc kubenswrapper[4693]: I1212 16:02:04.199267 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-bhbcr" event={"ID":"c26c573c-b897-45a2-9dda-d9e612e46d1d","Type":"ContainerStarted","Data":"a3ff7a0c8c8702a9a14d8393a8460d3e6081d18111ff7827463c70bd3a57607d"} Dec 12 16:02:04 crc kubenswrapper[4693]: I1212 16:02:04.199289 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dddlw" Dec 12 16:02:04 crc kubenswrapper[4693]: I1212 16:02:04.229505 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-ff9846bd-bhbcr" podStartSLOduration=2.721848876 podStartE2EDuration="12.229484974s" podCreationTimestamp="2025-12-12 16:01:52 +0000 UTC" firstStartedPulling="2025-12-12 16:01:53.513573802 +0000 UTC m=+940.682213403" lastFinishedPulling="2025-12-12 16:02:03.0212099 +0000 UTC m=+950.189849501" observedRunningTime="2025-12-12 16:02:04.222443944 +0000 UTC m=+951.391083545" watchObservedRunningTime="2025-12-12 16:02:04.229484974 +0000 UTC m=+951.398124575" Dec 12 16:02:04 crc kubenswrapper[4693]: I1212 16:02:04.242877 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dddlw"] Dec 12 16:02:04 crc kubenswrapper[4693]: I1212 16:02:04.248361 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dddlw"] Dec 12 16:02:05 crc kubenswrapper[4693]: I1212 16:02:05.364445 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b7f55e-826f-4a8d-bb2d-d253e5d0d624" path="/var/lib/kubelet/pods/75b7f55e-826f-4a8d-bb2d-d253e5d0d624/volumes" Dec 12 16:02:08 crc kubenswrapper[4693]: I1212 16:02:08.224407 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" event={"ID":"ac702b24-9bff-4198-a7f7-e368773fb8de","Type":"ContainerStarted","Data":"60654a71b8521a1430abdd53330fab7e37487319b78244d9d95d992d395b2c24"} Dec 12 16:02:12 crc kubenswrapper[4693]: I1212 16:02:12.530776 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:02:12 crc kubenswrapper[4693]: I1212 16:02:12.531201 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:02:14 crc kubenswrapper[4693]: I1212 16:02:14.266916 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" event={"ID":"ac702b24-9bff-4198-a7f7-e368773fb8de","Type":"ContainerStarted","Data":"6174aff8b3a45d378b0ebdf868e9aefc0358699d13006769a64b7fbe1166fdce"} Dec 12 16:02:14 crc kubenswrapper[4693]: I1212 16:02:14.267574 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:14 crc kubenswrapper[4693]: I1212 16:02:14.269800 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 16:02:14 crc kubenswrapper[4693]: I1212 16:02:14.289901 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" podStartSLOduration=2.659283231 podStartE2EDuration="13.289878384s" podCreationTimestamp="2025-12-12 16:02:01 +0000 UTC" 
firstStartedPulling="2025-12-12 16:02:03.411722821 +0000 UTC m=+950.580362422" lastFinishedPulling="2025-12-12 16:02:14.042317974 +0000 UTC m=+961.210957575" observedRunningTime="2025-12-12 16:02:14.284666754 +0000 UTC m=+961.453306355" watchObservedRunningTime="2025-12-12 16:02:14.289878384 +0000 UTC m=+961.458517985" Dec 12 16:02:17 crc kubenswrapper[4693]: I1212 16:02:17.882196 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Dec 12 16:02:17 crc kubenswrapper[4693]: E1212 16:02:17.883351 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b7f55e-826f-4a8d-bb2d-d253e5d0d624" containerName="extract-content" Dec 12 16:02:17 crc kubenswrapper[4693]: I1212 16:02:17.883372 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b7f55e-826f-4a8d-bb2d-d253e5d0d624" containerName="extract-content" Dec 12 16:02:17 crc kubenswrapper[4693]: E1212 16:02:17.883400 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b7f55e-826f-4a8d-bb2d-d253e5d0d624" containerName="extract-utilities" Dec 12 16:02:17 crc kubenswrapper[4693]: I1212 16:02:17.883412 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b7f55e-826f-4a8d-bb2d-d253e5d0d624" containerName="extract-utilities" Dec 12 16:02:17 crc kubenswrapper[4693]: E1212 16:02:17.883430 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b7f55e-826f-4a8d-bb2d-d253e5d0d624" containerName="registry-server" Dec 12 16:02:17 crc kubenswrapper[4693]: I1212 16:02:17.883443 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b7f55e-826f-4a8d-bb2d-d253e5d0d624" containerName="registry-server" Dec 12 16:02:17 crc kubenswrapper[4693]: I1212 16:02:17.883661 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b7f55e-826f-4a8d-bb2d-d253e5d0d624" containerName="registry-server" Dec 12 16:02:17 crc kubenswrapper[4693]: I1212 16:02:17.884622 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Dec 12 16:02:17 crc kubenswrapper[4693]: I1212 16:02:17.886885 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Dec 12 16:02:17 crc kubenswrapper[4693]: I1212 16:02:17.887579 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Dec 12 16:02:17 crc kubenswrapper[4693]: I1212 16:02:17.888225 4693 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-c7ddz" Dec 12 16:02:17 crc kubenswrapper[4693]: I1212 16:02:17.889123 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Dec 12 16:02:18 crc kubenswrapper[4693]: I1212 16:02:18.017550 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4df6fe31-4f2d-4b93-8801-38f3117f58c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4df6fe31-4f2d-4b93-8801-38f3117f58c9\") pod \"minio\" (UID: \"ded6d19e-72f0-4253-a08f-b9e3fa094937\") " pod="minio-dev/minio" Dec 12 16:02:18 crc kubenswrapper[4693]: I1212 16:02:18.017623 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fbfm\" (UniqueName: \"kubernetes.io/projected/ded6d19e-72f0-4253-a08f-b9e3fa094937-kube-api-access-4fbfm\") pod \"minio\" (UID: \"ded6d19e-72f0-4253-a08f-b9e3fa094937\") " pod="minio-dev/minio" Dec 12 16:02:18 crc kubenswrapper[4693]: I1212 16:02:18.118478 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fbfm\" (UniqueName: \"kubernetes.io/projected/ded6d19e-72f0-4253-a08f-b9e3fa094937-kube-api-access-4fbfm\") pod \"minio\" (UID: \"ded6d19e-72f0-4253-a08f-b9e3fa094937\") " pod="minio-dev/minio" Dec 12 16:02:18 crc kubenswrapper[4693]: I1212 16:02:18.118611 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4df6fe31-4f2d-4b93-8801-38f3117f58c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4df6fe31-4f2d-4b93-8801-38f3117f58c9\") pod \"minio\" (UID: \"ded6d19e-72f0-4253-a08f-b9e3fa094937\") " pod="minio-dev/minio" Dec 12 16:02:18 crc kubenswrapper[4693]: I1212 16:02:18.121507 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 12 16:02:18 crc kubenswrapper[4693]: I1212 16:02:18.121546 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4df6fe31-4f2d-4b93-8801-38f3117f58c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4df6fe31-4f2d-4b93-8801-38f3117f58c9\") pod \"minio\" (UID: \"ded6d19e-72f0-4253-a08f-b9e3fa094937\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/29df3748ddaf91c66b0b7872d78f64e708b99317e9efb2e1cd6d24bb5b33ec7a/globalmount\"" pod="minio-dev/minio" Dec 12 16:02:18 crc kubenswrapper[4693]: I1212 16:02:18.136346 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fbfm\" (UniqueName: \"kubernetes.io/projected/ded6d19e-72f0-4253-a08f-b9e3fa094937-kube-api-access-4fbfm\") pod \"minio\" (UID: \"ded6d19e-72f0-4253-a08f-b9e3fa094937\") " pod="minio-dev/minio" Dec 12 16:02:18 crc kubenswrapper[4693]: I1212 16:02:18.157130 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4df6fe31-4f2d-4b93-8801-38f3117f58c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4df6fe31-4f2d-4b93-8801-38f3117f58c9\") pod \"minio\" (UID: \"ded6d19e-72f0-4253-a08f-b9e3fa094937\") " pod="minio-dev/minio" Dec 12 16:02:18 crc kubenswrapper[4693]: I1212 16:02:18.206293 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Dec 12 16:02:18 crc kubenswrapper[4693]: I1212 16:02:18.652850 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Dec 12 16:02:19 crc kubenswrapper[4693]: I1212 16:02:19.302104 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"ded6d19e-72f0-4253-a08f-b9e3fa094937","Type":"ContainerStarted","Data":"788ff52eee2f395430bb53452fa2c28fc768795d2fefe25e418749252c241d4c"} Dec 12 16:02:31 crc kubenswrapper[4693]: I1212 16:02:31.401835 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"ded6d19e-72f0-4253-a08f-b9e3fa094937","Type":"ContainerStarted","Data":"83a1bf9a000aabbf13d5d46f97f1f27dc94b9d46be0b9ce34864eaf08d7cb19e"} Dec 12 16:02:31 crc kubenswrapper[4693]: I1212 16:02:31.438926 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.500184269 podStartE2EDuration="16.438905772s" podCreationTimestamp="2025-12-12 16:02:15 +0000 UTC" firstStartedPulling="2025-12-12 16:02:18.655648708 +0000 UTC m=+965.824288309" lastFinishedPulling="2025-12-12 16:02:30.594370221 +0000 UTC m=+977.763009812" observedRunningTime="2025-12-12 16:02:31.425322716 +0000 UTC m=+978.593962317" watchObservedRunningTime="2025-12-12 16:02:31.438905772 +0000 UTC m=+978.607545373" Dec 12 16:02:34 crc kubenswrapper[4693]: I1212 16:02:34.893654 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk"] Dec 12 16:02:34 crc kubenswrapper[4693]: I1212 16:02:34.895196 4693 util.go:30] "No sandbox for pod can be found. 
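The csi_attacher.go record for the minio PVC shows the capability gate on CSI mounts: because kubevirt.io.hostpath-provisioner does not advertise the STAGE_UNSTAGE_VOLUME node capability, the NodeStageVolume/MountDevice step is skipped and the volume proceeds directly to SetUp (NodePublishVolume). A simplified illustration of that branch; this is not the csi_attacher source, just the decision it logs:

    package main

    import "fmt"

    // mountDevice mimics the gate above: without STAGE_UNSTAGE_VOLUME,
    // the staging step at the global mount path is skipped entirely.
    func mountDevice(driver string, hasStageUnstage bool, globalMountPath string) {
    	if !hasStageUnstage {
    		fmt.Println("attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...")
    		return
    	}
    	fmt.Printf("NodeStageVolume via %s at %s\n", driver, globalMountPath)
    }

    func main() {
    	mountDevice("kubevirt.io.hostpath-provisioner", false,
    		"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/29df3748ddaf91c66b0b7872d78f64e708b99317e9efb2e1cd6d24bb5b33ec7a/globalmount")
    }
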
Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:34 crc kubenswrapper[4693]: I1212 16:02:34.900379 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Dec 12 16:02:34 crc kubenswrapper[4693]: I1212 16:02:34.900571 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Dec 12 16:02:34 crc kubenswrapper[4693]: I1212 16:02:34.901193 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Dec 12 16:02:34 crc kubenswrapper[4693]: I1212 16:02:34.901266 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Dec 12 16:02:34 crc kubenswrapper[4693]: I1212 16:02:34.901434 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-v8xln" Dec 12 16:02:34 crc kubenswrapper[4693]: I1212 16:02:34.909909 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk"] Dec 12 16:02:34 crc kubenswrapper[4693]: I1212 16:02:34.983754 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znfx6\" (UniqueName: \"kubernetes.io/projected/fee116a3-18ff-4755-b34f-82baa25eeefd-kube-api-access-znfx6\") pod \"logging-loki-distributor-76cc67bf56-fsnzk\" (UID: \"fee116a3-18ff-4755-b34f-82baa25eeefd\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:34 crc kubenswrapper[4693]: I1212 16:02:34.983872 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee116a3-18ff-4755-b34f-82baa25eeefd-config\") pod \"logging-loki-distributor-76cc67bf56-fsnzk\" (UID: \"fee116a3-18ff-4755-b34f-82baa25eeefd\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:34 crc kubenswrapper[4693]: I1212 16:02:34.983929 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/fee116a3-18ff-4755-b34f-82baa25eeefd-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-fsnzk\" (UID: \"fee116a3-18ff-4755-b34f-82baa25eeefd\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:34 crc kubenswrapper[4693]: I1212 16:02:34.984001 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/fee116a3-18ff-4755-b34f-82baa25eeefd-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-fsnzk\" (UID: \"fee116a3-18ff-4755-b34f-82baa25eeefd\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:34 crc kubenswrapper[4693]: I1212 16:02:34.984052 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fee116a3-18ff-4755-b34f-82baa25eeefd-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-fsnzk\" (UID: \"fee116a3-18ff-4755-b34f-82baa25eeefd\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.035441 4693 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-gcwwx"] Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.036313 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.038686 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.038866 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.039966 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.050534 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-gcwwx"] Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.084696 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/fee116a3-18ff-4755-b34f-82baa25eeefd-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-fsnzk\" (UID: \"fee116a3-18ff-4755-b34f-82baa25eeefd\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.084748 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5txxk\" (UniqueName: \"kubernetes.io/projected/b89219b7-2b92-44d8-897c-beb5ef9d6861-kube-api-access-5txxk\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.084771 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/b89219b7-2b92-44d8-897c-beb5ef9d6861-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.084792 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/fee116a3-18ff-4755-b34f-82baa25eeefd-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-fsnzk\" (UID: \"fee116a3-18ff-4755-b34f-82baa25eeefd\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.084807 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b89219b7-2b92-44d8-897c-beb5ef9d6861-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.084845 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fee116a3-18ff-4755-b34f-82baa25eeefd-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-fsnzk\" (UID: 
\"fee116a3-18ff-4755-b34f-82baa25eeefd\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.084869 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b89219b7-2b92-44d8-897c-beb5ef9d6861-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.084909 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b89219b7-2b92-44d8-897c-beb5ef9d6861-config\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.084927 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znfx6\" (UniqueName: \"kubernetes.io/projected/fee116a3-18ff-4755-b34f-82baa25eeefd-kube-api-access-znfx6\") pod \"logging-loki-distributor-76cc67bf56-fsnzk\" (UID: \"fee116a3-18ff-4755-b34f-82baa25eeefd\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.084946 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/b89219b7-2b92-44d8-897c-beb5ef9d6861-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.084966 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee116a3-18ff-4755-b34f-82baa25eeefd-config\") pod \"logging-loki-distributor-76cc67bf56-fsnzk\" (UID: \"fee116a3-18ff-4755-b34f-82baa25eeefd\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.085893 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee116a3-18ff-4755-b34f-82baa25eeefd-config\") pod \"logging-loki-distributor-76cc67bf56-fsnzk\" (UID: \"fee116a3-18ff-4755-b34f-82baa25eeefd\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.086994 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fee116a3-18ff-4755-b34f-82baa25eeefd-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-fsnzk\" (UID: \"fee116a3-18ff-4755-b34f-82baa25eeefd\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.090763 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/fee116a3-18ff-4755-b34f-82baa25eeefd-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-fsnzk\" (UID: \"fee116a3-18ff-4755-b34f-82baa25eeefd\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 
16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.091214 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/fee116a3-18ff-4755-b34f-82baa25eeefd-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-fsnzk\" (UID: \"fee116a3-18ff-4755-b34f-82baa25eeefd\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.113240 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znfx6\" (UniqueName: \"kubernetes.io/projected/fee116a3-18ff-4755-b34f-82baa25eeefd-kube-api-access-znfx6\") pod \"logging-loki-distributor-76cc67bf56-fsnzk\" (UID: \"fee116a3-18ff-4755-b34f-82baa25eeefd\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.140244 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw"] Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.141211 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.142933 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.147350 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.165236 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw"] Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.188718 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5txxk\" (UniqueName: \"kubernetes.io/projected/b89219b7-2b92-44d8-897c-beb5ef9d6861-kube-api-access-5txxk\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.188766 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/b89219b7-2b92-44d8-897c-beb5ef9d6861-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.188794 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/2691b993-4939-4cf2-84ab-1d34ea3dded9-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-ht5fw\" (UID: \"2691b993-4939-4cf2-84ab-1d34ea3dded9\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.188815 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b89219b7-2b92-44d8-897c-beb5ef9d6861-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " 
pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.188851 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2691b993-4939-4cf2-84ab-1d34ea3dded9-config\") pod \"logging-loki-query-frontend-84558f7c9f-ht5fw\" (UID: \"2691b993-4939-4cf2-84ab-1d34ea3dded9\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.188890 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b89219b7-2b92-44d8-897c-beb5ef9d6861-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.188926 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/2691b993-4939-4cf2-84ab-1d34ea3dded9-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-ht5fw\" (UID: \"2691b993-4939-4cf2-84ab-1d34ea3dded9\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.188964 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b89219b7-2b92-44d8-897c-beb5ef9d6861-config\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.189000 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/b89219b7-2b92-44d8-897c-beb5ef9d6861-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.189019 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brszf\" (UniqueName: \"kubernetes.io/projected/2691b993-4939-4cf2-84ab-1d34ea3dded9-kube-api-access-brszf\") pod \"logging-loki-query-frontend-84558f7c9f-ht5fw\" (UID: \"2691b993-4939-4cf2-84ab-1d34ea3dded9\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.189037 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2691b993-4939-4cf2-84ab-1d34ea3dded9-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-ht5fw\" (UID: \"2691b993-4939-4cf2-84ab-1d34ea3dded9\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.191057 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b89219b7-2b92-44d8-897c-beb5ef9d6861-config\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " 
pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.191117 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b89219b7-2b92-44d8-897c-beb5ef9d6861-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.195671 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/b89219b7-2b92-44d8-897c-beb5ef9d6861-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.195834 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b89219b7-2b92-44d8-897c-beb5ef9d6861-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.198940 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/b89219b7-2b92-44d8-897c-beb5ef9d6861-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.211312 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5txxk\" (UniqueName: \"kubernetes.io/projected/b89219b7-2b92-44d8-897c-beb5ef9d6861-kube-api-access-5txxk\") pod \"logging-loki-querier-5895d59bb8-gcwwx\" (UID: \"b89219b7-2b92-44d8-897c-beb5ef9d6861\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.212990 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.241858 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5665b75b44-6fjrq"] Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.262916 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5665b75b44-jstfj"] Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.263692 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.264009 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5665b75b44-6fjrq"] Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.264080 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.267973 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.268234 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.268354 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.268573 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.268697 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-vw25g" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.268803 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.286945 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5665b75b44-jstfj"] Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294029 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/34840dce-2cd9-4cf3-81cc-be2fb6e08993-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294067 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brszf\" (UniqueName: \"kubernetes.io/projected/2691b993-4939-4cf2-84ab-1d34ea3dded9-kube-api-access-brszf\") pod \"logging-loki-query-frontend-84558f7c9f-ht5fw\" (UID: \"2691b993-4939-4cf2-84ab-1d34ea3dded9\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294087 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/752b64e1-40d2-47cd-a555-0e23495e2443-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294107 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2691b993-4939-4cf2-84ab-1d34ea3dded9-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-ht5fw\" (UID: \"2691b993-4939-4cf2-84ab-1d34ea3dded9\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294123 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34840dce-2cd9-4cf3-81cc-be2fb6e08993-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: 
\"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294141 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/752b64e1-40d2-47cd-a555-0e23495e2443-tenants\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294167 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/34840dce-2cd9-4cf3-81cc-be2fb6e08993-lokistack-gateway\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294184 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/752b64e1-40d2-47cd-a555-0e23495e2443-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294208 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/34840dce-2cd9-4cf3-81cc-be2fb6e08993-tls-secret\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294228 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/2691b993-4939-4cf2-84ab-1d34ea3dded9-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-ht5fw\" (UID: \"2691b993-4939-4cf2-84ab-1d34ea3dded9\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294249 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm22b\" (UniqueName: \"kubernetes.io/projected/34840dce-2cd9-4cf3-81cc-be2fb6e08993-kube-api-access-gm22b\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294281 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2691b993-4939-4cf2-84ab-1d34ea3dded9-config\") pod \"logging-loki-query-frontend-84558f7c9f-ht5fw\" (UID: \"2691b993-4939-4cf2-84ab-1d34ea3dded9\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294310 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vndv2\" (UniqueName: \"kubernetes.io/projected/752b64e1-40d2-47cd-a555-0e23495e2443-kube-api-access-vndv2\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: 
\"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294329 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/752b64e1-40d2-47cd-a555-0e23495e2443-lokistack-gateway\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294346 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/34840dce-2cd9-4cf3-81cc-be2fb6e08993-tenants\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294365 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/752b64e1-40d2-47cd-a555-0e23495e2443-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294384 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34840dce-2cd9-4cf3-81cc-be2fb6e08993-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294405 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/752b64e1-40d2-47cd-a555-0e23495e2443-tls-secret\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294423 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/2691b993-4939-4cf2-84ab-1d34ea3dded9-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-ht5fw\" (UID: \"2691b993-4939-4cf2-84ab-1d34ea3dded9\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294438 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/752b64e1-40d2-47cd-a555-0e23495e2443-rbac\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.294454 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/34840dce-2cd9-4cf3-81cc-be2fb6e08993-rbac\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " 
pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.295342 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2691b993-4939-4cf2-84ab-1d34ea3dded9-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-ht5fw\" (UID: \"2691b993-4939-4cf2-84ab-1d34ea3dded9\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.298371 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2691b993-4939-4cf2-84ab-1d34ea3dded9-config\") pod \"logging-loki-query-frontend-84558f7c9f-ht5fw\" (UID: \"2691b993-4939-4cf2-84ab-1d34ea3dded9\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.313480 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/2691b993-4939-4cf2-84ab-1d34ea3dded9-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-ht5fw\" (UID: \"2691b993-4939-4cf2-84ab-1d34ea3dded9\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.314552 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/2691b993-4939-4cf2-84ab-1d34ea3dded9-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-ht5fw\" (UID: \"2691b993-4939-4cf2-84ab-1d34ea3dded9\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.328175 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brszf\" (UniqueName: \"kubernetes.io/projected/2691b993-4939-4cf2-84ab-1d34ea3dded9-kube-api-access-brszf\") pod \"logging-loki-query-frontend-84558f7c9f-ht5fw\" (UID: \"2691b993-4939-4cf2-84ab-1d34ea3dded9\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.350417 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.397322 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/34840dce-2cd9-4cf3-81cc-be2fb6e08993-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.397368 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/752b64e1-40d2-47cd-a555-0e23495e2443-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.397397 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34840dce-2cd9-4cf3-81cc-be2fb6e08993-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.397420 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/752b64e1-40d2-47cd-a555-0e23495e2443-tenants\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.397474 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/34840dce-2cd9-4cf3-81cc-be2fb6e08993-lokistack-gateway\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.397500 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/752b64e1-40d2-47cd-a555-0e23495e2443-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.397531 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/34840dce-2cd9-4cf3-81cc-be2fb6e08993-tls-secret\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.397577 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm22b\" (UniqueName: \"kubernetes.io/projected/34840dce-2cd9-4cf3-81cc-be2fb6e08993-kube-api-access-gm22b\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc 
kubenswrapper[4693]: I1212 16:02:35.397634 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vndv2\" (UniqueName: \"kubernetes.io/projected/752b64e1-40d2-47cd-a555-0e23495e2443-kube-api-access-vndv2\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.397672 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/752b64e1-40d2-47cd-a555-0e23495e2443-lokistack-gateway\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.397705 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/34840dce-2cd9-4cf3-81cc-be2fb6e08993-tenants\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.397736 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/752b64e1-40d2-47cd-a555-0e23495e2443-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.397762 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34840dce-2cd9-4cf3-81cc-be2fb6e08993-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.397791 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/752b64e1-40d2-47cd-a555-0e23495e2443-tls-secret\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.397823 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/752b64e1-40d2-47cd-a555-0e23495e2443-rbac\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.397850 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/34840dce-2cd9-4cf3-81cc-be2fb6e08993-rbac\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: E1212 16:02:35.398376 4693 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Dec 12 16:02:35 crc kubenswrapper[4693]: E1212 16:02:35.398433 4693 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34840dce-2cd9-4cf3-81cc-be2fb6e08993-tls-secret podName:34840dce-2cd9-4cf3-81cc-be2fb6e08993 nodeName:}" failed. No retries permitted until 2025-12-12 16:02:35.898415888 +0000 UTC m=+983.067055479 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/34840dce-2cd9-4cf3-81cc-be2fb6e08993-tls-secret") pod "logging-loki-gateway-5665b75b44-6fjrq" (UID: "34840dce-2cd9-4cf3-81cc-be2fb6e08993") : secret "logging-loki-gateway-http" not found Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.399381 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/752b64e1-40d2-47cd-a555-0e23495e2443-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.400250 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/34840dce-2cd9-4cf3-81cc-be2fb6e08993-rbac\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: E1212 16:02:35.400710 4693 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Dec 12 16:02:35 crc kubenswrapper[4693]: E1212 16:02:35.400779 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/752b64e1-40d2-47cd-a555-0e23495e2443-tls-secret podName:752b64e1-40d2-47cd-a555-0e23495e2443 nodeName:}" failed. No retries permitted until 2025-12-12 16:02:35.900760291 +0000 UTC m=+983.069399892 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/752b64e1-40d2-47cd-a555-0e23495e2443-tls-secret") pod "logging-loki-gateway-5665b75b44-jstfj" (UID: "752b64e1-40d2-47cd-a555-0e23495e2443") : secret "logging-loki-gateway-http" not found Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.401169 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/752b64e1-40d2-47cd-a555-0e23495e2443-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.401198 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34840dce-2cd9-4cf3-81cc-be2fb6e08993-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.401685 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34840dce-2cd9-4cf3-81cc-be2fb6e08993-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.403238 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/752b64e1-40d2-47cd-a555-0e23495e2443-rbac\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.406497 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/752b64e1-40d2-47cd-a555-0e23495e2443-lokistack-gateway\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.415098 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/752b64e1-40d2-47cd-a555-0e23495e2443-tenants\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.415633 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/752b64e1-40d2-47cd-a555-0e23495e2443-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.416312 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/34840dce-2cd9-4cf3-81cc-be2fb6e08993-lokistack-gateway\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: 
\"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.418832 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/34840dce-2cd9-4cf3-81cc-be2fb6e08993-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.418899 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/34840dce-2cd9-4cf3-81cc-be2fb6e08993-tenants\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.420520 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vndv2\" (UniqueName: \"kubernetes.io/projected/752b64e1-40d2-47cd-a555-0e23495e2443-kube-api-access-vndv2\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.425390 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm22b\" (UniqueName: \"kubernetes.io/projected/34840dce-2cd9-4cf3-81cc-be2fb6e08993-kube-api-access-gm22b\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.455520 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.570736 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-gcwwx"] Dec 12 16:02:35 crc kubenswrapper[4693]: W1212 16:02:35.575614 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb89219b7_2b92_44d8_897c_beb5ef9d6861.slice/crio-875604ca759aa759633123b20ebef892ab7de95308f0bd250d2849a93a358516 WatchSource:0}: Error finding container 875604ca759aa759633123b20ebef892ab7de95308f0bd250d2849a93a358516: Status 404 returned error can't find the container with id 875604ca759aa759633123b20ebef892ab7de95308f0bd250d2849a93a358516 Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.698241 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk"] Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.727992 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw"] Dec 12 16:02:35 crc kubenswrapper[4693]: W1212 16:02:35.729920 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2691b993_4939_4cf2_84ab_1d34ea3dded9.slice/crio-a7b73be45f71c5fcd6e50de323e8ccd8f51560a4ada8f32ccbf111a4f54d4aad WatchSource:0}: Error finding container a7b73be45f71c5fcd6e50de323e8ccd8f51560a4ada8f32ccbf111a4f54d4aad: Status 404 returned error can't find the container with id a7b73be45f71c5fcd6e50de323e8ccd8f51560a4ada8f32ccbf111a4f54d4aad Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.907152 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/752b64e1-40d2-47cd-a555-0e23495e2443-tls-secret\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.907250 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/34840dce-2cd9-4cf3-81cc-be2fb6e08993-tls-secret\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.913260 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/752b64e1-40d2-47cd-a555-0e23495e2443-tls-secret\") pod \"logging-loki-gateway-5665b75b44-jstfj\" (UID: \"752b64e1-40d2-47cd-a555-0e23495e2443\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.913415 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/34840dce-2cd9-4cf3-81cc-be2fb6e08993-tls-secret\") pod \"logging-loki-gateway-5665b75b44-6fjrq\" (UID: \"34840dce-2cd9-4cf3-81cc-be2fb6e08993\") " pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.914681 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:35 crc kubenswrapper[4693]: I1212 16:02:35.935832 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.036652 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.037721 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.040261 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.040764 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.050706 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.122690 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.123777 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.134392 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.134635 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.140000 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.203091 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.204173 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.206290 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.206395 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.211774 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8c936c1-e205-423b-960f-af3894b107df-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.211829 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/c8c936c1-e205-423b-960f-af3894b107df-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.211948 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-de4c855c-2746-4dd6-bed9-f7ea60569432\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-de4c855c-2746-4dd6-bed9-f7ea60569432\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.211991 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c8c936c1-e205-423b-960f-af3894b107df-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.212023 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c936c1-e205-423b-960f-af3894b107df-config\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.212073 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c8c936c1-e205-423b-960f-af3894b107df-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.212087 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x855k\" (UniqueName: \"kubernetes.io/projected/c8c936c1-e205-423b-960f-af3894b107df-kube-api-access-x855k\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.212132 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-352a211f-5faf-4b66-974b-a761bf7175f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-352a211f-5faf-4b66-974b-a761bf7175f6\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.213515 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.229984 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5665b75b44-6fjrq"] Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.313840 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c8c936c1-e205-423b-960f-af3894b107df-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.313899 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da093088-1eba-4749-954b-c87347466fa9-config\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.313925 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54ae60ed-431d-4995-a2f7-564738343760-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.313941 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c8c936c1-e205-423b-960f-af3894b107df-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.313959 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/da093088-1eba-4749-954b-c87347466fa9-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.313978 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da093088-1eba-4749-954b-c87347466fa9-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.313997 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-43155a75-09f3-4ebf-9e2a-2ff1d0c51cfc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-43155a75-09f3-4ebf-9e2a-2ff1d0c51cfc\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " 
pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.314026 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-352a211f-5faf-4b66-974b-a761bf7175f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-352a211f-5faf-4b66-974b-a761bf7175f6\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.314047 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/c8c936c1-e205-423b-960f-af3894b107df-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.314062 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/da093088-1eba-4749-954b-c87347466fa9-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.314078 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p88mb\" (UniqueName: \"kubernetes.io/projected/da093088-1eba-4749-954b-c87347466fa9-kube-api-access-p88mb\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.314108 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c936c1-e205-423b-960f-af3894b107df-config\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.314129 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/54ae60ed-431d-4995-a2f7-564738343760-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.314157 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x855k\" (UniqueName: \"kubernetes.io/projected/c8c936c1-e205-423b-960f-af3894b107df-kube-api-access-x855k\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.314185 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/54ae60ed-431d-4995-a2f7-564738343760-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.314204 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/c8c936c1-e205-423b-960f-af3894b107df-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.314224 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-959a5d8e-49ec-4962-81fc-25f7f4f31ce4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-959a5d8e-49ec-4962-81fc-25f7f4f31ce4\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.314241 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54ae60ed-431d-4995-a2f7-564738343760-config\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.314257 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/54ae60ed-431d-4995-a2f7-564738343760-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.314285 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/da093088-1eba-4749-954b-c87347466fa9-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.314306 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-de4c855c-2746-4dd6-bed9-f7ea60569432\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-de4c855c-2746-4dd6-bed9-f7ea60569432\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.314324 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6sc6\" (UniqueName: \"kubernetes.io/projected/54ae60ed-431d-4995-a2f7-564738343760-kube-api-access-z6sc6\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.316413 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c936c1-e205-423b-960f-af3894b107df-config\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.316528 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8c936c1-e205-423b-960f-af3894b107df-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" 
Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.318217 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/c8c936c1-e205-423b-960f-af3894b107df-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.318384 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c8c936c1-e205-423b-960f-af3894b107df-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.319079 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.319108 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-de4c855c-2746-4dd6-bed9-f7ea60569432\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-de4c855c-2746-4dd6-bed9-f7ea60569432\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e2344ccdc626c09e23e4e66c8f546e8747babb4017b4c511812bbe1c87b57655/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.319086 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.319194 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-352a211f-5faf-4b66-974b-a761bf7175f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-352a211f-5faf-4b66-974b-a761bf7175f6\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4aae647ed0546997a0ddb5b6cc888ae8ee08ecc45793069aae79b9087c277327/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.324530 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c8c936c1-e205-423b-960f-af3894b107df-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.336199 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x855k\" (UniqueName: \"kubernetes.io/projected/c8c936c1-e205-423b-960f-af3894b107df-kube-api-access-x855k\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.350591 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-352a211f-5faf-4b66-974b-a761bf7175f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-352a211f-5faf-4b66-974b-a761bf7175f6\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " 
pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.354038 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-de4c855c-2746-4dd6-bed9-f7ea60569432\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-de4c855c-2746-4dd6-bed9-f7ea60569432\") pod \"logging-loki-ingester-0\" (UID: \"c8c936c1-e205-423b-960f-af3894b107df\") " pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.363549 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.397556 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5665b75b44-jstfj"] Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.415940 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-43155a75-09f3-4ebf-9e2a-2ff1d0c51cfc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-43155a75-09f3-4ebf-9e2a-2ff1d0c51cfc\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.416175 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/da093088-1eba-4749-954b-c87347466fa9-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.416328 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p88mb\" (UniqueName: \"kubernetes.io/projected/da093088-1eba-4749-954b-c87347466fa9-kube-api-access-p88mb\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.416460 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/54ae60ed-431d-4995-a2f7-564738343760-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.416597 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/54ae60ed-431d-4995-a2f7-564738343760-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.416717 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-959a5d8e-49ec-4962-81fc-25f7f4f31ce4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-959a5d8e-49ec-4962-81fc-25f7f4f31ce4\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.416869 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/54ae60ed-431d-4995-a2f7-564738343760-config\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.417074 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/54ae60ed-431d-4995-a2f7-564738343760-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.417240 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/da093088-1eba-4749-954b-c87347466fa9-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.417409 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6sc6\" (UniqueName: \"kubernetes.io/projected/54ae60ed-431d-4995-a2f7-564738343760-kube-api-access-z6sc6\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.417588 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da093088-1eba-4749-954b-c87347466fa9-config\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.417755 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54ae60ed-431d-4995-a2f7-564738343760-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.417919 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/da093088-1eba-4749-954b-c87347466fa9-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.417997 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54ae60ed-431d-4995-a2f7-564738343760-config\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.418099 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da093088-1eba-4749-954b-c87347466fa9-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.418875 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54ae60ed-431d-4995-a2f7-564738343760-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.419891 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da093088-1eba-4749-954b-c87347466fa9-config\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.420013 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.420062 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-43155a75-09f3-4ebf-9e2a-2ff1d0c51cfc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-43155a75-09f3-4ebf-9e2a-2ff1d0c51cfc\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4d7bdd77a608e6c9d422a8e26420f493c5df9baac0dec21bcd21b2ad7458c42f/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.420653 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.420684 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-959a5d8e-49ec-4962-81fc-25f7f4f31ce4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-959a5d8e-49ec-4962-81fc-25f7f4f31ce4\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/598ba5a5f4d2f19e388ce7da10c8045ef40bcf5f164f2218ababa422dbb1c023/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.422095 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/54ae60ed-431d-4995-a2f7-564738343760-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.423343 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/da093088-1eba-4749-954b-c87347466fa9-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.423958 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/54ae60ed-431d-4995-a2f7-564738343760-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.424448 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/54ae60ed-431d-4995-a2f7-564738343760-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.433867 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/da093088-1eba-4749-954b-c87347466fa9-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.435109 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/da093088-1eba-4749-954b-c87347466fa9-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.436631 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6sc6\" (UniqueName: \"kubernetes.io/projected/54ae60ed-431d-4995-a2f7-564738343760-kube-api-access-z6sc6\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.437020 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p88mb\" (UniqueName: \"kubernetes.io/projected/da093088-1eba-4749-954b-c87347466fa9-kube-api-access-p88mb\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.439798 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da093088-1eba-4749-954b-c87347466fa9-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.454558 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" event={"ID":"b89219b7-2b92-44d8-897c-beb5ef9d6861","Type":"ContainerStarted","Data":"875604ca759aa759633123b20ebef892ab7de95308f0bd250d2849a93a358516"} Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.457754 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" event={"ID":"fee116a3-18ff-4755-b34f-82baa25eeefd","Type":"ContainerStarted","Data":"138601d906696655993a997f9c4f701d7f8920b218205f3c3f0b7badea8f00f2"} Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.458844 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" event={"ID":"2691b993-4939-4cf2-84ab-1d34ea3dded9","Type":"ContainerStarted","Data":"a7b73be45f71c5fcd6e50de323e8ccd8f51560a4ada8f32ccbf111a4f54d4aad"} Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.459592 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-959a5d8e-49ec-4962-81fc-25f7f4f31ce4\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-959a5d8e-49ec-4962-81fc-25f7f4f31ce4\") pod \"logging-loki-index-gateway-0\" (UID: \"54ae60ed-431d-4995-a2f7-564738343760\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.460190 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" event={"ID":"752b64e1-40d2-47cd-a555-0e23495e2443","Type":"ContainerStarted","Data":"24153f04982d74bb6a224ced2d0ad79c8c7dd90ea1d7343667950e4079ea6e57"} Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.462105 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" event={"ID":"34840dce-2cd9-4cf3-81cc-be2fb6e08993","Type":"ContainerStarted","Data":"137e8a0d45c8d6177cd9fca6c341889165e2d610aff24ee8a629023a7a0331e4"} Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.465145 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-43155a75-09f3-4ebf-9e2a-2ff1d0c51cfc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-43155a75-09f3-4ebf-9e2a-2ff1d0c51cfc\") pod \"logging-loki-compactor-0\" (UID: \"da093088-1eba-4749-954b-c87347466fa9\") " pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.520886 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.760652 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.796722 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.931551 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Dec 12 16:02:36 crc kubenswrapper[4693]: I1212 16:02:36.981561 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Dec 12 16:02:36 crc kubenswrapper[4693]: W1212 16:02:36.985246 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda093088_1eba_4749_954b_c87347466fa9.slice/crio-35d9dd32170f45a98b619a12621590da64c1e04df1416e38ba3d0da4ff46625f WatchSource:0}: Error finding container 35d9dd32170f45a98b619a12621590da64c1e04df1416e38ba3d0da4ff46625f: Status 404 returned error can't find the container with id 35d9dd32170f45a98b619a12621590da64c1e04df1416e38ba3d0da4ff46625f Dec 12 16:02:37 crc kubenswrapper[4693]: I1212 16:02:37.468380 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"da093088-1eba-4749-954b-c87347466fa9","Type":"ContainerStarted","Data":"35d9dd32170f45a98b619a12621590da64c1e04df1416e38ba3d0da4ff46625f"} Dec 12 16:02:37 crc kubenswrapper[4693]: I1212 16:02:37.469264 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"54ae60ed-431d-4995-a2f7-564738343760","Type":"ContainerStarted","Data":"7d1e5aba2638d216aae57ee3cafbdeb5b91fce6135ef52e18597fe2321e2008e"} Dec 12 16:02:37 crc kubenswrapper[4693]: I1212 16:02:37.469926 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-ingester-0" event={"ID":"c8c936c1-e205-423b-960f-af3894b107df","Type":"ContainerStarted","Data":"edb1db0ff32f413b3d969da781dc638cbac58919daaef58f7656a96f9b8faf5c"} Dec 12 16:02:42 crc kubenswrapper[4693]: I1212 16:02:42.530242 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:02:42 crc kubenswrapper[4693]: I1212 16:02:42.530863 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.536873 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" event={"ID":"34840dce-2cd9-4cf3-81cc-be2fb6e08993","Type":"ContainerStarted","Data":"f6d671a7e852ec06bae3c034e492af9a432f8ad1d75d9a3e8097f2f9254cffe6"} Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.538369 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" event={"ID":"fee116a3-18ff-4755-b34f-82baa25eeefd","Type":"ContainerStarted","Data":"e308335fd932b58e65f2e9d9a1a7ccca3c48e6fb9d4e54023859a6479e3491dd"} Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.538476 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.540702 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"54ae60ed-431d-4995-a2f7-564738343760","Type":"ContainerStarted","Data":"4ee011451201d3dfa693b54b568c27e83705dca066eb047d4e72c0a3e9168f17"} Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.540749 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.542667 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"da093088-1eba-4749-954b-c87347466fa9","Type":"ContainerStarted","Data":"fb12630845e95a77a078675e18d153095720529f43f84135dc947195a2791186"} Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.542788 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.544380 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"c8c936c1-e205-423b-960f-af3894b107df","Type":"ContainerStarted","Data":"103f7147cad57de5fd431dcfeb1ac401a125cb7d0ee46f1d285148c1df914faa"} Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.544445 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.546171 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" 
event={"ID":"b89219b7-2b92-44d8-897c-beb5ef9d6861","Type":"ContainerStarted","Data":"9f94de3c53b28dbe332cabfb8ff54583229ebfb1fc0425e8289469fc230cb100"} Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.546364 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.547844 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" event={"ID":"2691b993-4939-4cf2-84ab-1d34ea3dded9","Type":"ContainerStarted","Data":"8916d81b3c628f0eed57a340109b815a8e1e9c79596e86d217103fe454dce0b4"} Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.547992 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.549670 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" event={"ID":"752b64e1-40d2-47cd-a555-0e23495e2443","Type":"ContainerStarted","Data":"26c24fc4644743cc9fe9012c9c535c72f14698c2c17fc8b689e42b4bac186a62"} Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.564846 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" podStartSLOduration=2.284968901 podStartE2EDuration="12.564824732s" podCreationTimestamp="2025-12-12 16:02:34 +0000 UTC" firstStartedPulling="2025-12-12 16:02:35.697754102 +0000 UTC m=+982.866393703" lastFinishedPulling="2025-12-12 16:02:45.977609913 +0000 UTC m=+993.146249534" observedRunningTime="2025-12-12 16:02:46.559229581 +0000 UTC m=+993.727869182" watchObservedRunningTime="2025-12-12 16:02:46.564824732 +0000 UTC m=+993.733464343" Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.588683 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" podStartSLOduration=1.634469167 podStartE2EDuration="11.588663174s" podCreationTimestamp="2025-12-12 16:02:35 +0000 UTC" firstStartedPulling="2025-12-12 16:02:35.581250774 +0000 UTC m=+982.749890375" lastFinishedPulling="2025-12-12 16:02:45.535444781 +0000 UTC m=+992.704084382" observedRunningTime="2025-12-12 16:02:46.587485772 +0000 UTC m=+993.756125373" watchObservedRunningTime="2025-12-12 16:02:46.588663174 +0000 UTC m=+993.757302775" Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.604680 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=2.749788822 podStartE2EDuration="11.604659955s" podCreationTimestamp="2025-12-12 16:02:35 +0000 UTC" firstStartedPulling="2025-12-12 16:02:36.819833349 +0000 UTC m=+983.988472950" lastFinishedPulling="2025-12-12 16:02:45.674704472 +0000 UTC m=+992.843344083" observedRunningTime="2025-12-12 16:02:46.602325622 +0000 UTC m=+993.770965313" watchObservedRunningTime="2025-12-12 16:02:46.604659955 +0000 UTC m=+993.773299556" Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.623160 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=2.557097901 podStartE2EDuration="11.623138573s" podCreationTimestamp="2025-12-12 16:02:35 +0000 UTC" firstStartedPulling="2025-12-12 16:02:36.934642982 +0000 UTC m=+984.103282583" lastFinishedPulling="2025-12-12 
16:02:46.000683624 +0000 UTC m=+993.169323255" observedRunningTime="2025-12-12 16:02:46.62231058 +0000 UTC m=+993.790950181" watchObservedRunningTime="2025-12-12 16:02:46.623138573 +0000 UTC m=+993.791778174" Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.659213 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" podStartSLOduration=1.362379587 podStartE2EDuration="11.659196004s" podCreationTimestamp="2025-12-12 16:02:35 +0000 UTC" firstStartedPulling="2025-12-12 16:02:35.732612251 +0000 UTC m=+982.901251852" lastFinishedPulling="2025-12-12 16:02:46.029428668 +0000 UTC m=+993.198068269" observedRunningTime="2025-12-12 16:02:46.642548916 +0000 UTC m=+993.811188517" watchObservedRunningTime="2025-12-12 16:02:46.659196004 +0000 UTC m=+993.827835605" Dec 12 16:02:46 crc kubenswrapper[4693]: I1212 16:02:46.659624 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=2.643710653 podStartE2EDuration="11.659617635s" podCreationTimestamp="2025-12-12 16:02:35 +0000 UTC" firstStartedPulling="2025-12-12 16:02:36.987101505 +0000 UTC m=+984.155741096" lastFinishedPulling="2025-12-12 16:02:46.003008457 +0000 UTC m=+993.171648078" observedRunningTime="2025-12-12 16:02:46.657313783 +0000 UTC m=+993.825953394" watchObservedRunningTime="2025-12-12 16:02:46.659617635 +0000 UTC m=+993.828257236" Dec 12 16:02:50 crc kubenswrapper[4693]: I1212 16:02:50.580094 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" event={"ID":"752b64e1-40d2-47cd-a555-0e23495e2443","Type":"ContainerStarted","Data":"125c76494ee07a84cfdb888de6f2f5298c988dc761fd10b0eb0a8256d467651c"} Dec 12 16:02:50 crc kubenswrapper[4693]: I1212 16:02:50.582488 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" event={"ID":"34840dce-2cd9-4cf3-81cc-be2fb6e08993","Type":"ContainerStarted","Data":"6ae977714cbd21c378ddd53fd447c561a34bc11fd5844d3793eae5475759f19e"} Dec 12 16:02:51 crc kubenswrapper[4693]: I1212 16:02:51.590314 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:51 crc kubenswrapper[4693]: I1212 16:02:51.590356 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:51 crc kubenswrapper[4693]: I1212 16:02:51.590370 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:51 crc kubenswrapper[4693]: I1212 16:02:51.598679 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:51 crc kubenswrapper[4693]: I1212 16:02:51.608183 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" Dec 12 16:02:51 crc kubenswrapper[4693]: I1212 16:02:51.608466 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:51 crc kubenswrapper[4693]: I1212 16:02:51.621399 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" 
podStartSLOduration=2.762444343 podStartE2EDuration="16.62136724s" podCreationTimestamp="2025-12-12 16:02:35 +0000 UTC" firstStartedPulling="2025-12-12 16:02:36.405468718 +0000 UTC m=+983.574108329" lastFinishedPulling="2025-12-12 16:02:50.264391625 +0000 UTC m=+997.433031226" observedRunningTime="2025-12-12 16:02:51.60984485 +0000 UTC m=+998.778484531" watchObservedRunningTime="2025-12-12 16:02:51.62136724 +0000 UTC m=+998.790006881" Dec 12 16:02:51 crc kubenswrapper[4693]: I1212 16:02:51.649633 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" podStartSLOduration=2.611097076 podStartE2EDuration="16.649611321s" podCreationTimestamp="2025-12-12 16:02:35 +0000 UTC" firstStartedPulling="2025-12-12 16:02:36.235474318 +0000 UTC m=+983.404113919" lastFinishedPulling="2025-12-12 16:02:50.273988563 +0000 UTC m=+997.442628164" observedRunningTime="2025-12-12 16:02:51.641069041 +0000 UTC m=+998.809708662" watchObservedRunningTime="2025-12-12 16:02:51.649611321 +0000 UTC m=+998.818250932" Dec 12 16:02:52 crc kubenswrapper[4693]: I1212 16:02:52.595516 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:02:52 crc kubenswrapper[4693]: I1212 16:02:52.601912 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" Dec 12 16:03:05 crc kubenswrapper[4693]: I1212 16:03:05.222312 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 16:03:05 crc kubenswrapper[4693]: I1212 16:03:05.366483 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 16:03:05 crc kubenswrapper[4693]: I1212 16:03:05.462727 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 16:03:06 crc kubenswrapper[4693]: I1212 16:03:06.372241 4693 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Dec 12 16:03:06 crc kubenswrapper[4693]: I1212 16:03:06.372625 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="c8c936c1-e205-423b-960f-af3894b107df" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 12 16:03:06 crc kubenswrapper[4693]: I1212 16:03:06.529230 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Dec 12 16:03:06 crc kubenswrapper[4693]: I1212 16:03:06.766870 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Dec 12 16:03:12 crc kubenswrapper[4693]: I1212 16:03:12.530830 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:03:12 crc kubenswrapper[4693]: I1212 16:03:12.531237 4693 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:03:12 crc kubenswrapper[4693]: I1212 16:03:12.531333 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 16:03:12 crc kubenswrapper[4693]: I1212 16:03:12.532155 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f8076bbaf9c92a7134e9ae28b9eeeb9f0776e367f05e17a692efcc2523d8648"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:03:12 crc kubenswrapper[4693]: I1212 16:03:12.532226 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://6f8076bbaf9c92a7134e9ae28b9eeeb9f0776e367f05e17a692efcc2523d8648" gracePeriod=600 Dec 12 16:03:13 crc kubenswrapper[4693]: I1212 16:03:13.765020 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="6f8076bbaf9c92a7134e9ae28b9eeeb9f0776e367f05e17a692efcc2523d8648" exitCode=0 Dec 12 16:03:13 crc kubenswrapper[4693]: I1212 16:03:13.765088 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"6f8076bbaf9c92a7134e9ae28b9eeeb9f0776e367f05e17a692efcc2523d8648"} Dec 12 16:03:13 crc kubenswrapper[4693]: I1212 16:03:13.765352 4693 scope.go:117] "RemoveContainer" containerID="74051a73f37429f331f62089999985291d71febca1fcfa63c6624e65d5235174" Dec 12 16:03:15 crc kubenswrapper[4693]: I1212 16:03:15.778493 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"9b99609eca8bf887c0f086d452cd1f8437812e8c5e6edb0ab2c3a059f6382847"} Dec 12 16:03:16 crc kubenswrapper[4693]: I1212 16:03:16.369139 4693 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Dec 12 16:03:16 crc kubenswrapper[4693]: I1212 16:03:16.369447 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="c8c936c1-e205-423b-960f-af3894b107df" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 12 16:03:26 crc kubenswrapper[4693]: I1212 16:03:26.370550 4693 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Dec 12 16:03:26 crc kubenswrapper[4693]: I1212 16:03:26.371509 4693 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-ingester-0" podUID="c8c936c1-e205-423b-960f-af3894b107df" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 12 16:03:36 crc kubenswrapper[4693]: I1212 16:03:36.369063 4693 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Dec 12 16:03:36 crc kubenswrapper[4693]: I1212 16:03:36.369598 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="c8c936c1-e205-423b-960f-af3894b107df" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 12 16:03:46 crc kubenswrapper[4693]: I1212 16:03:46.370304 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.275716 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-dv7wq"] Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.277419 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.291205 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.293157 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.293607 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.294225 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.294454 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-4fbdb" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.299772 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.314404 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-dv7wq"] Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.379648 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-dv7wq"] Dec 12 16:03:56 crc kubenswrapper[4693]: E1212 16:03:56.380314 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-rt4x5 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-dv7wq" podUID="c2f77080-c7e1-4855-b7aa-2fb6be36ea3e" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.386310 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-entrypoint\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" 
Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.386363 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt4x5\" (UniqueName: \"kubernetes.io/projected/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-kube-api-access-rt4x5\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.386560 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-trusted-ca\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.386620 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-config\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.386656 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-collector-syslog-receiver\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.386708 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-datadir\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.386725 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-metrics\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.386744 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-config-openshift-service-cacrt\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.386767 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-sa-token\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.386872 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-tmp\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.386908 4693 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-collector-token\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.488730 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-config-openshift-service-cacrt\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.488801 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-sa-token\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.488852 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-tmp\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.488885 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-collector-token\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.488923 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-entrypoint\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.488946 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt4x5\" (UniqueName: \"kubernetes.io/projected/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-kube-api-access-rt4x5\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.488999 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-trusted-ca\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.489029 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-config\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.489057 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-collector-syslog-receiver\") pod 
\"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.489112 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-datadir\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.489155 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-metrics\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.490593 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-config-openshift-service-cacrt\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: E1212 16:03:56.490735 4693 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Dec 12 16:03:56 crc kubenswrapper[4693]: E1212 16:03:56.490791 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-collector-syslog-receiver podName:c2f77080-c7e1-4855-b7aa-2fb6be36ea3e nodeName:}" failed. No retries permitted until 2025-12-12 16:03:56.990772873 +0000 UTC m=+1064.159412544 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-collector-syslog-receiver") pod "collector-dv7wq" (UID: "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e") : secret "collector-syslog-receiver" not found Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.491060 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-datadir\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.491179 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-config\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.491466 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-trusted-ca\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.491802 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-entrypoint\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.496556 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-collector-token\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.496985 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-metrics\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.505561 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-tmp\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.514204 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-sa-token\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.519295 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt4x5\" (UniqueName: \"kubernetes.io/projected/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-kube-api-access-rt4x5\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:56 crc kubenswrapper[4693]: I1212 16:03:56.997591 4693 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-collector-syslog-receiver\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.000963 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-collector-syslog-receiver\") pod \"collector-dv7wq\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " pod="openshift-logging/collector-dv7wq" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.079447 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-dv7wq" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.089854 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-dv7wq" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.199932 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-sa-token\") pod \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.200016 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-collector-token\") pod \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.200057 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-trusted-ca\") pod \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.200076 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-config\") pod \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.200097 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-config-openshift-service-cacrt\") pod \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.200114 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt4x5\" (UniqueName: \"kubernetes.io/projected/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-kube-api-access-rt4x5\") pod \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.200138 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-entrypoint\") pod \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " Dec 12 
16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.200176 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-tmp\") pod \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.200193 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-collector-syslog-receiver\") pod \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.200234 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-datadir\") pod \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.200247 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-metrics\") pod \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\" (UID: \"c2f77080-c7e1-4855-b7aa-2fb6be36ea3e\") " Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.201331 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e" (UID: "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.201802 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e" (UID: "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.202189 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-config" (OuterVolumeSpecName: "config") pod "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e" (UID: "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.202804 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-datadir" (OuterVolumeSpecName: "datadir") pod "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e" (UID: "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.203055 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e" (UID: "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e"). InnerVolumeSpecName "entrypoint". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.203606 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-metrics" (OuterVolumeSpecName: "metrics") pod "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e" (UID: "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.204902 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e" (UID: "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.205641 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-kube-api-access-rt4x5" (OuterVolumeSpecName: "kube-api-access-rt4x5") pod "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e" (UID: "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e"). InnerVolumeSpecName "kube-api-access-rt4x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.206549 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-collector-token" (OuterVolumeSpecName: "collector-token") pod "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e" (UID: "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.207738 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-sa-token" (OuterVolumeSpecName: "sa-token") pod "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e" (UID: "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.209786 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-tmp" (OuterVolumeSpecName: "tmp") pod "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e" (UID: "c2f77080-c7e1-4855-b7aa-2fb6be36ea3e"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.302229 4693 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.303209 4693 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-collector-token\") on node \"crc\" DevicePath \"\"" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.303356 4693 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.303467 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.303556 4693 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.303620 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt4x5\" (UniqueName: \"kubernetes.io/projected/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-kube-api-access-rt4x5\") on node \"crc\" DevicePath \"\"" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.303685 4693 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-entrypoint\") on node \"crc\" DevicePath \"\"" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.303741 4693 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.303798 4693 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.303856 4693 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-datadir\") on node \"crc\" DevicePath \"\"" Dec 12 16:03:57 crc kubenswrapper[4693]: I1212 16:03:57.303946 4693 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.086648 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-dv7wq" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.127113 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-dv7wq"] Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.135868 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-dv7wq"] Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.167302 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-6slps"] Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.169376 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.173912 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.174979 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-6slps"] Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.175946 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.176652 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.176962 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.177137 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-4fbdb" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.185784 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.320539 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a275e891-4084-4a2c-914f-0827a75e7906-metrics\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.320663 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a275e891-4084-4a2c-914f-0827a75e7906-datadir\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.320738 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a275e891-4084-4a2c-914f-0827a75e7906-collector-token\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.320764 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a275e891-4084-4a2c-914f-0827a75e7906-config\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.320789 4693 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a275e891-4084-4a2c-914f-0827a75e7906-sa-token\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.320824 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a275e891-4084-4a2c-914f-0827a75e7906-collector-syslog-receiver\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.320856 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a275e891-4084-4a2c-914f-0827a75e7906-entrypoint\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.320888 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a275e891-4084-4a2c-914f-0827a75e7906-trusted-ca\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.320919 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a275e891-4084-4a2c-914f-0827a75e7906-config-openshift-service-cacrt\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.320953 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx9xf\" (UniqueName: \"kubernetes.io/projected/a275e891-4084-4a2c-914f-0827a75e7906-kube-api-access-jx9xf\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.320976 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a275e891-4084-4a2c-914f-0827a75e7906-tmp\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.422198 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a275e891-4084-4a2c-914f-0827a75e7906-datadir\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.422665 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a275e891-4084-4a2c-914f-0827a75e7906-collector-token\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.422376 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: 
\"kubernetes.io/host-path/a275e891-4084-4a2c-914f-0827a75e7906-datadir\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.422743 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a275e891-4084-4a2c-914f-0827a75e7906-config\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.422781 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a275e891-4084-4a2c-914f-0827a75e7906-sa-token\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.422822 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a275e891-4084-4a2c-914f-0827a75e7906-collector-syslog-receiver\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.422844 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a275e891-4084-4a2c-914f-0827a75e7906-entrypoint\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.422877 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a275e891-4084-4a2c-914f-0827a75e7906-trusted-ca\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.422914 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a275e891-4084-4a2c-914f-0827a75e7906-config-openshift-service-cacrt\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.422953 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx9xf\" (UniqueName: \"kubernetes.io/projected/a275e891-4084-4a2c-914f-0827a75e7906-kube-api-access-jx9xf\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.422978 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a275e891-4084-4a2c-914f-0827a75e7906-tmp\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.423007 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a275e891-4084-4a2c-914f-0827a75e7906-metrics\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: 
I1212 16:03:58.423781 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a275e891-4084-4a2c-914f-0827a75e7906-config-openshift-service-cacrt\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.424057 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a275e891-4084-4a2c-914f-0827a75e7906-trusted-ca\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.424093 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a275e891-4084-4a2c-914f-0827a75e7906-config\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.424327 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a275e891-4084-4a2c-914f-0827a75e7906-entrypoint\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.426952 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a275e891-4084-4a2c-914f-0827a75e7906-metrics\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.427686 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a275e891-4084-4a2c-914f-0827a75e7906-collector-token\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.428738 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a275e891-4084-4a2c-914f-0827a75e7906-tmp\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.429262 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a275e891-4084-4a2c-914f-0827a75e7906-collector-syslog-receiver\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.443780 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a275e891-4084-4a2c-914f-0827a75e7906-sa-token\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.443981 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx9xf\" (UniqueName: \"kubernetes.io/projected/a275e891-4084-4a2c-914f-0827a75e7906-kube-api-access-jx9xf\") pod \"collector-6slps\" (UID: \"a275e891-4084-4a2c-914f-0827a75e7906\") " 
pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.491425 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-6slps" Dec 12 16:03:58 crc kubenswrapper[4693]: I1212 16:03:58.913383 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-6slps"] Dec 12 16:03:59 crc kubenswrapper[4693]: I1212 16:03:59.096232 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-6slps" event={"ID":"a275e891-4084-4a2c-914f-0827a75e7906","Type":"ContainerStarted","Data":"86da365850d8c1ca10ea49a70dfb2f264a13a2158979218d41d0293a01c7918c"} Dec 12 16:03:59 crc kubenswrapper[4693]: I1212 16:03:59.366068 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2f77080-c7e1-4855-b7aa-2fb6be36ea3e" path="/var/lib/kubelet/pods/c2f77080-c7e1-4855-b7aa-2fb6be36ea3e/volumes" Dec 12 16:04:08 crc kubenswrapper[4693]: I1212 16:04:08.170004 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-6slps" event={"ID":"a275e891-4084-4a2c-914f-0827a75e7906","Type":"ContainerStarted","Data":"52b19b0c7513c51668bdcc7331b325804d1929d9d03d955333484f5063b11c68"} Dec 12 16:04:08 crc kubenswrapper[4693]: I1212 16:04:08.195385 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-6slps" podStartSLOduration=1.535267479 podStartE2EDuration="10.195362521s" podCreationTimestamp="2025-12-12 16:03:58 +0000 UTC" firstStartedPulling="2025-12-12 16:03:58.925750855 +0000 UTC m=+1066.094390466" lastFinishedPulling="2025-12-12 16:04:07.585845907 +0000 UTC m=+1074.754485508" observedRunningTime="2025-12-12 16:04:08.187878849 +0000 UTC m=+1075.356518490" watchObservedRunningTime="2025-12-12 16:04:08.195362521 +0000 UTC m=+1075.364002132" Dec 12 16:04:38 crc kubenswrapper[4693]: I1212 16:04:38.344555 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd"] Dec 12 16:04:38 crc kubenswrapper[4693]: I1212 16:04:38.348440 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" Dec 12 16:04:38 crc kubenswrapper[4693]: I1212 16:04:38.351617 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 12 16:04:38 crc kubenswrapper[4693]: I1212 16:04:38.372739 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd"] Dec 12 16:04:38 crc kubenswrapper[4693]: I1212 16:04:38.518808 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a80047c-85a8-4930-824e-a6deb0957638-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd\" (UID: \"1a80047c-85a8-4930-824e-a6deb0957638\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" Dec 12 16:04:38 crc kubenswrapper[4693]: I1212 16:04:38.518916 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a80047c-85a8-4930-824e-a6deb0957638-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd\" (UID: \"1a80047c-85a8-4930-824e-a6deb0957638\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" Dec 12 16:04:38 crc kubenswrapper[4693]: I1212 16:04:38.518940 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4jrq\" (UniqueName: \"kubernetes.io/projected/1a80047c-85a8-4930-824e-a6deb0957638-kube-api-access-h4jrq\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd\" (UID: \"1a80047c-85a8-4930-824e-a6deb0957638\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" Dec 12 16:04:38 crc kubenswrapper[4693]: I1212 16:04:38.621128 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a80047c-85a8-4930-824e-a6deb0957638-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd\" (UID: \"1a80047c-85a8-4930-824e-a6deb0957638\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" Dec 12 16:04:38 crc kubenswrapper[4693]: I1212 16:04:38.621208 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a80047c-85a8-4930-824e-a6deb0957638-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd\" (UID: \"1a80047c-85a8-4930-824e-a6deb0957638\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" Dec 12 16:04:38 crc kubenswrapper[4693]: I1212 16:04:38.621235 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4jrq\" (UniqueName: \"kubernetes.io/projected/1a80047c-85a8-4930-824e-a6deb0957638-kube-api-access-h4jrq\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd\" (UID: \"1a80047c-85a8-4930-824e-a6deb0957638\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" Dec 12 16:04:38 crc kubenswrapper[4693]: I1212 16:04:38.621979 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1a80047c-85a8-4930-824e-a6deb0957638-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd\" (UID: \"1a80047c-85a8-4930-824e-a6deb0957638\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" Dec 12 16:04:38 crc kubenswrapper[4693]: I1212 16:04:38.622098 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a80047c-85a8-4930-824e-a6deb0957638-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd\" (UID: \"1a80047c-85a8-4930-824e-a6deb0957638\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" Dec 12 16:04:38 crc kubenswrapper[4693]: I1212 16:04:38.641357 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4jrq\" (UniqueName: \"kubernetes.io/projected/1a80047c-85a8-4930-824e-a6deb0957638-kube-api-access-h4jrq\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd\" (UID: \"1a80047c-85a8-4930-824e-a6deb0957638\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" Dec 12 16:04:38 crc kubenswrapper[4693]: I1212 16:04:38.665465 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" Dec 12 16:04:39 crc kubenswrapper[4693]: I1212 16:04:39.082541 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd"] Dec 12 16:04:39 crc kubenswrapper[4693]: I1212 16:04:39.401365 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" event={"ID":"1a80047c-85a8-4930-824e-a6deb0957638","Type":"ContainerStarted","Data":"98b6033cdafca9905bcf5b53ae662a745cf757aae387e8774b714fe3c18b61d3"} Dec 12 16:04:40 crc kubenswrapper[4693]: I1212 16:04:40.411369 4693 generic.go:334] "Generic (PLEG): container finished" podID="1a80047c-85a8-4930-824e-a6deb0957638" containerID="71ddfed3a879b0e0b39c2a7d02e271f6094f82091bb995f54829bc20f9df36ed" exitCode=0 Dec 12 16:04:40 crc kubenswrapper[4693]: I1212 16:04:40.411421 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" event={"ID":"1a80047c-85a8-4930-824e-a6deb0957638","Type":"ContainerDied","Data":"71ddfed3a879b0e0b39c2a7d02e271f6094f82091bb995f54829bc20f9df36ed"} Dec 12 16:04:42 crc kubenswrapper[4693]: I1212 16:04:42.431314 4693 generic.go:334] "Generic (PLEG): container finished" podID="1a80047c-85a8-4930-824e-a6deb0957638" containerID="f3be8144c95c0e0fb2462936e3687892ff79dbac9f4f2043fd92f94642a2fb5a" exitCode=0 Dec 12 16:04:42 crc kubenswrapper[4693]: I1212 16:04:42.431409 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" event={"ID":"1a80047c-85a8-4930-824e-a6deb0957638","Type":"ContainerDied","Data":"f3be8144c95c0e0fb2462936e3687892ff79dbac9f4f2043fd92f94642a2fb5a"} Dec 12 16:04:43 crc kubenswrapper[4693]: I1212 16:04:43.441534 4693 generic.go:334] "Generic (PLEG): container finished" podID="1a80047c-85a8-4930-824e-a6deb0957638" containerID="8925fc9429e00995050b2f40e76a80e4fd5b370c90d617af5a6f32eb4deccc60" exitCode=0 Dec 12 16:04:43 crc kubenswrapper[4693]: I1212 
16:04:43.441639 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" event={"ID":"1a80047c-85a8-4930-824e-a6deb0957638","Type":"ContainerDied","Data":"8925fc9429e00995050b2f40e76a80e4fd5b370c90d617af5a6f32eb4deccc60"} Dec 12 16:04:44 crc kubenswrapper[4693]: I1212 16:04:44.754215 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" Dec 12 16:04:44 crc kubenswrapper[4693]: I1212 16:04:44.922718 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4jrq\" (UniqueName: \"kubernetes.io/projected/1a80047c-85a8-4930-824e-a6deb0957638-kube-api-access-h4jrq\") pod \"1a80047c-85a8-4930-824e-a6deb0957638\" (UID: \"1a80047c-85a8-4930-824e-a6deb0957638\") " Dec 12 16:04:44 crc kubenswrapper[4693]: I1212 16:04:44.922828 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a80047c-85a8-4930-824e-a6deb0957638-util\") pod \"1a80047c-85a8-4930-824e-a6deb0957638\" (UID: \"1a80047c-85a8-4930-824e-a6deb0957638\") " Dec 12 16:04:44 crc kubenswrapper[4693]: I1212 16:04:44.922880 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a80047c-85a8-4930-824e-a6deb0957638-bundle\") pod \"1a80047c-85a8-4930-824e-a6deb0957638\" (UID: \"1a80047c-85a8-4930-824e-a6deb0957638\") " Dec 12 16:04:44 crc kubenswrapper[4693]: I1212 16:04:44.923670 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a80047c-85a8-4930-824e-a6deb0957638-bundle" (OuterVolumeSpecName: "bundle") pod "1a80047c-85a8-4930-824e-a6deb0957638" (UID: "1a80047c-85a8-4930-824e-a6deb0957638"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:04:44 crc kubenswrapper[4693]: I1212 16:04:44.928293 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a80047c-85a8-4930-824e-a6deb0957638-kube-api-access-h4jrq" (OuterVolumeSpecName: "kube-api-access-h4jrq") pod "1a80047c-85a8-4930-824e-a6deb0957638" (UID: "1a80047c-85a8-4930-824e-a6deb0957638"). InnerVolumeSpecName "kube-api-access-h4jrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:04:44 crc kubenswrapper[4693]: I1212 16:04:44.939203 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a80047c-85a8-4930-824e-a6deb0957638-util" (OuterVolumeSpecName: "util") pod "1a80047c-85a8-4930-824e-a6deb0957638" (UID: "1a80047c-85a8-4930-824e-a6deb0957638"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:04:45 crc kubenswrapper[4693]: I1212 16:04:45.024568 4693 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a80047c-85a8-4930-824e-a6deb0957638-util\") on node \"crc\" DevicePath \"\"" Dec 12 16:04:45 crc kubenswrapper[4693]: I1212 16:04:45.024603 4693 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a80047c-85a8-4930-824e-a6deb0957638-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:04:45 crc kubenswrapper[4693]: I1212 16:04:45.024614 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4jrq\" (UniqueName: \"kubernetes.io/projected/1a80047c-85a8-4930-824e-a6deb0957638-kube-api-access-h4jrq\") on node \"crc\" DevicePath \"\"" Dec 12 16:04:45 crc kubenswrapper[4693]: I1212 16:04:45.456476 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" event={"ID":"1a80047c-85a8-4930-824e-a6deb0957638","Type":"ContainerDied","Data":"98b6033cdafca9905bcf5b53ae662a745cf757aae387e8774b714fe3c18b61d3"} Dec 12 16:04:45 crc kubenswrapper[4693]: I1212 16:04:45.456520 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98b6033cdafca9905bcf5b53ae662a745cf757aae387e8774b714fe3c18b61d3" Dec 12 16:04:45 crc kubenswrapper[4693]: I1212 16:04:45.456491 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8zctnd" Dec 12 16:04:45 crc kubenswrapper[4693]: E1212 16:04:45.506813 4693 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a80047c_85a8_4930_824e_a6deb0957638.slice/crio-98b6033cdafca9905bcf5b53ae662a745cf757aae387e8774b714fe3c18b61d3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a80047c_85a8_4930_824e_a6deb0957638.slice\": RecentStats: unable to find data in memory cache]" Dec 12 16:04:50 crc kubenswrapper[4693]: I1212 16:04:50.637088 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-wzh4r"] Dec 12 16:04:50 crc kubenswrapper[4693]: E1212 16:04:50.639240 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a80047c-85a8-4930-824e-a6deb0957638" containerName="util" Dec 12 16:04:50 crc kubenswrapper[4693]: I1212 16:04:50.639389 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a80047c-85a8-4930-824e-a6deb0957638" containerName="util" Dec 12 16:04:50 crc kubenswrapper[4693]: E1212 16:04:50.639419 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a80047c-85a8-4930-824e-a6deb0957638" containerName="pull" Dec 12 16:04:50 crc kubenswrapper[4693]: I1212 16:04:50.639425 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a80047c-85a8-4930-824e-a6deb0957638" containerName="pull" Dec 12 16:04:50 crc kubenswrapper[4693]: E1212 16:04:50.639470 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a80047c-85a8-4930-824e-a6deb0957638" containerName="extract" Dec 12 16:04:50 crc kubenswrapper[4693]: I1212 16:04:50.639476 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a80047c-85a8-4930-824e-a6deb0957638" containerName="extract" Dec 12 16:04:50 crc 
kubenswrapper[4693]: I1212 16:04:50.640749 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a80047c-85a8-4930-824e-a6deb0957638" containerName="extract" Dec 12 16:04:50 crc kubenswrapper[4693]: I1212 16:04:50.641606 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-wzh4r" Dec 12 16:04:50 crc kubenswrapper[4693]: I1212 16:04:50.647108 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 12 16:04:50 crc kubenswrapper[4693]: I1212 16:04:50.647407 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 12 16:04:50 crc kubenswrapper[4693]: I1212 16:04:50.650969 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-g47ld" Dec 12 16:04:50 crc kubenswrapper[4693]: I1212 16:04:50.673349 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-wzh4r"] Dec 12 16:04:50 crc kubenswrapper[4693]: I1212 16:04:50.713086 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbbg9\" (UniqueName: \"kubernetes.io/projected/cf026df5-a8cf-42d4-a964-5518b8cf7339-kube-api-access-rbbg9\") pod \"nmstate-operator-6769fb99d-wzh4r\" (UID: \"cf026df5-a8cf-42d4-a964-5518b8cf7339\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-wzh4r" Dec 12 16:04:50 crc kubenswrapper[4693]: I1212 16:04:50.814726 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbbg9\" (UniqueName: \"kubernetes.io/projected/cf026df5-a8cf-42d4-a964-5518b8cf7339-kube-api-access-rbbg9\") pod \"nmstate-operator-6769fb99d-wzh4r\" (UID: \"cf026df5-a8cf-42d4-a964-5518b8cf7339\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-wzh4r" Dec 12 16:04:50 crc kubenswrapper[4693]: I1212 16:04:50.837240 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbbg9\" (UniqueName: \"kubernetes.io/projected/cf026df5-a8cf-42d4-a964-5518b8cf7339-kube-api-access-rbbg9\") pod \"nmstate-operator-6769fb99d-wzh4r\" (UID: \"cf026df5-a8cf-42d4-a964-5518b8cf7339\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-wzh4r" Dec 12 16:04:50 crc kubenswrapper[4693]: I1212 16:04:50.971077 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-wzh4r" Dec 12 16:04:51 crc kubenswrapper[4693]: I1212 16:04:51.402563 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-wzh4r"] Dec 12 16:04:51 crc kubenswrapper[4693]: I1212 16:04:51.505129 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-wzh4r" event={"ID":"cf026df5-a8cf-42d4-a964-5518b8cf7339","Type":"ContainerStarted","Data":"665086ba3d34d3550e5aba7c5319d235c31c79d939562382923915bf6b90edd9"} Dec 12 16:04:54 crc kubenswrapper[4693]: I1212 16:04:54.526627 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-wzh4r" event={"ID":"cf026df5-a8cf-42d4-a964-5518b8cf7339","Type":"ContainerStarted","Data":"dda95ecfd9f7c9b1643071395a16850ad377eb7f71c2070a63f6c9df60a9f342"} Dec 12 16:04:54 crc kubenswrapper[4693]: I1212 16:04:54.543008 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-wzh4r" podStartSLOduration=1.667266867 podStartE2EDuration="4.542988061s" podCreationTimestamp="2025-12-12 16:04:50 +0000 UTC" firstStartedPulling="2025-12-12 16:04:51.421974577 +0000 UTC m=+1118.590614178" lastFinishedPulling="2025-12-12 16:04:54.297695771 +0000 UTC m=+1121.466335372" observedRunningTime="2025-12-12 16:04:54.541927313 +0000 UTC m=+1121.710566924" watchObservedRunningTime="2025-12-12 16:04:54.542988061 +0000 UTC m=+1121.711627672" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.462825 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-428hg"] Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.464619 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-428hg" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.468945 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-2dvg8" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.472846 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4"] Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.474333 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.476167 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.480875 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-428hg"] Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.488995 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4"] Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.513360 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-z5g4l"] Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.518614 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-z5g4l" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.560838 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t996d\" (UniqueName: \"kubernetes.io/projected/4b75d97e-a5e9-44e7-8589-1eeb2e620672-kube-api-access-t996d\") pod \"nmstate-metrics-7f7f7578db-428hg\" (UID: \"4b75d97e-a5e9-44e7-8589-1eeb2e620672\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-428hg" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.560909 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/18a4d414-e872-4bb3-ae29-166fcc455a9a-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-4jrn4\" (UID: \"18a4d414-e872-4bb3-ae29-166fcc455a9a\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.561131 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnvgx\" (UniqueName: \"kubernetes.io/projected/18a4d414-e872-4bb3-ae29-166fcc455a9a-kube-api-access-qnvgx\") pod \"nmstate-webhook-f8fb84555-4jrn4\" (UID: \"18a4d414-e872-4bb3-ae29-166fcc455a9a\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.663034 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmrtp\" (UniqueName: \"kubernetes.io/projected/df1e4454-429c-4d2a-b372-b33ee0e88e6b-kube-api-access-dmrtp\") pod \"nmstate-handler-z5g4l\" (UID: \"df1e4454-429c-4d2a-b372-b33ee0e88e6b\") " pod="openshift-nmstate/nmstate-handler-z5g4l" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.663143 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/df1e4454-429c-4d2a-b372-b33ee0e88e6b-dbus-socket\") pod \"nmstate-handler-z5g4l\" (UID: \"df1e4454-429c-4d2a-b372-b33ee0e88e6b\") " pod="openshift-nmstate/nmstate-handler-z5g4l" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.663197 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/df1e4454-429c-4d2a-b372-b33ee0e88e6b-nmstate-lock\") pod \"nmstate-handler-z5g4l\" (UID: \"df1e4454-429c-4d2a-b372-b33ee0e88e6b\") " pod="openshift-nmstate/nmstate-handler-z5g4l" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.663244 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t996d\" (UniqueName: \"kubernetes.io/projected/4b75d97e-a5e9-44e7-8589-1eeb2e620672-kube-api-access-t996d\") pod \"nmstate-metrics-7f7f7578db-428hg\" (UID: \"4b75d97e-a5e9-44e7-8589-1eeb2e620672\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-428hg" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.663584 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/18a4d414-e872-4bb3-ae29-166fcc455a9a-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-4jrn4\" (UID: \"18a4d414-e872-4bb3-ae29-166fcc455a9a\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4" Dec 12 16:04:59 crc kubenswrapper[4693]: E1212 16:04:59.663690 4693 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret 
"openshift-nmstate-webhook" not found Dec 12 16:04:59 crc kubenswrapper[4693]: E1212 16:04:59.663743 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18a4d414-e872-4bb3-ae29-166fcc455a9a-tls-key-pair podName:18a4d414-e872-4bb3-ae29-166fcc455a9a nodeName:}" failed. No retries permitted until 2025-12-12 16:05:00.16372707 +0000 UTC m=+1127.332366671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/18a4d414-e872-4bb3-ae29-166fcc455a9a-tls-key-pair") pod "nmstate-webhook-f8fb84555-4jrn4" (UID: "18a4d414-e872-4bb3-ae29-166fcc455a9a") : secret "openshift-nmstate-webhook" not found Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.663928 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/df1e4454-429c-4d2a-b372-b33ee0e88e6b-ovs-socket\") pod \"nmstate-handler-z5g4l\" (UID: \"df1e4454-429c-4d2a-b372-b33ee0e88e6b\") " pod="openshift-nmstate/nmstate-handler-z5g4l" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.663963 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnvgx\" (UniqueName: \"kubernetes.io/projected/18a4d414-e872-4bb3-ae29-166fcc455a9a-kube-api-access-qnvgx\") pod \"nmstate-webhook-f8fb84555-4jrn4\" (UID: \"18a4d414-e872-4bb3-ae29-166fcc455a9a\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.690672 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t996d\" (UniqueName: \"kubernetes.io/projected/4b75d97e-a5e9-44e7-8589-1eeb2e620672-kube-api-access-t996d\") pod \"nmstate-metrics-7f7f7578db-428hg\" (UID: \"4b75d97e-a5e9-44e7-8589-1eeb2e620672\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-428hg" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.697931 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg"] Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.699001 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.699673 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnvgx\" (UniqueName: \"kubernetes.io/projected/18a4d414-e872-4bb3-ae29-166fcc455a9a-kube-api-access-qnvgx\") pod \"nmstate-webhook-f8fb84555-4jrn4\" (UID: \"18a4d414-e872-4bb3-ae29-166fcc455a9a\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.702977 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg"] Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.703670 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.703877 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.704118 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-n2pwd" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.765842 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k86sf\" (UniqueName: \"kubernetes.io/projected/6e086d96-bcae-47f7-b910-8682de2b1b11-kube-api-access-k86sf\") pod \"nmstate-console-plugin-6ff7998486-ln6lg\" (UID: \"6e086d96-bcae-47f7-b910-8682de2b1b11\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.765895 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e086d96-bcae-47f7-b910-8682de2b1b11-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-ln6lg\" (UID: \"6e086d96-bcae-47f7-b910-8682de2b1b11\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.765919 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/df1e4454-429c-4d2a-b372-b33ee0e88e6b-ovs-socket\") pod \"nmstate-handler-z5g4l\" (UID: \"df1e4454-429c-4d2a-b372-b33ee0e88e6b\") " pod="openshift-nmstate/nmstate-handler-z5g4l" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.765963 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmrtp\" (UniqueName: \"kubernetes.io/projected/df1e4454-429c-4d2a-b372-b33ee0e88e6b-kube-api-access-dmrtp\") pod \"nmstate-handler-z5g4l\" (UID: \"df1e4454-429c-4d2a-b372-b33ee0e88e6b\") " pod="openshift-nmstate/nmstate-handler-z5g4l" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.765995 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/df1e4454-429c-4d2a-b372-b33ee0e88e6b-dbus-socket\") pod \"nmstate-handler-z5g4l\" (UID: \"df1e4454-429c-4d2a-b372-b33ee0e88e6b\") " pod="openshift-nmstate/nmstate-handler-z5g4l" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.766048 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/df1e4454-429c-4d2a-b372-b33ee0e88e6b-nmstate-lock\") pod \"nmstate-handler-z5g4l\" (UID: 
\"df1e4454-429c-4d2a-b372-b33ee0e88e6b\") " pod="openshift-nmstate/nmstate-handler-z5g4l" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.766069 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6e086d96-bcae-47f7-b910-8682de2b1b11-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-ln6lg\" (UID: \"6e086d96-bcae-47f7-b910-8682de2b1b11\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.766141 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/df1e4454-429c-4d2a-b372-b33ee0e88e6b-ovs-socket\") pod \"nmstate-handler-z5g4l\" (UID: \"df1e4454-429c-4d2a-b372-b33ee0e88e6b\") " pod="openshift-nmstate/nmstate-handler-z5g4l" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.766176 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/df1e4454-429c-4d2a-b372-b33ee0e88e6b-nmstate-lock\") pod \"nmstate-handler-z5g4l\" (UID: \"df1e4454-429c-4d2a-b372-b33ee0e88e6b\") " pod="openshift-nmstate/nmstate-handler-z5g4l" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.766421 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/df1e4454-429c-4d2a-b372-b33ee0e88e6b-dbus-socket\") pod \"nmstate-handler-z5g4l\" (UID: \"df1e4454-429c-4d2a-b372-b33ee0e88e6b\") " pod="openshift-nmstate/nmstate-handler-z5g4l" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.789485 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmrtp\" (UniqueName: \"kubernetes.io/projected/df1e4454-429c-4d2a-b372-b33ee0e88e6b-kube-api-access-dmrtp\") pod \"nmstate-handler-z5g4l\" (UID: \"df1e4454-429c-4d2a-b372-b33ee0e88e6b\") " pod="openshift-nmstate/nmstate-handler-z5g4l" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.792162 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-428hg" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.834390 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-z5g4l" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.867574 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6e086d96-bcae-47f7-b910-8682de2b1b11-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-ln6lg\" (UID: \"6e086d96-bcae-47f7-b910-8682de2b1b11\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.867664 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k86sf\" (UniqueName: \"kubernetes.io/projected/6e086d96-bcae-47f7-b910-8682de2b1b11-kube-api-access-k86sf\") pod \"nmstate-console-plugin-6ff7998486-ln6lg\" (UID: \"6e086d96-bcae-47f7-b910-8682de2b1b11\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.867687 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e086d96-bcae-47f7-b910-8682de2b1b11-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-ln6lg\" (UID: \"6e086d96-bcae-47f7-b910-8682de2b1b11\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg" Dec 12 16:04:59 crc kubenswrapper[4693]: E1212 16:04:59.867804 4693 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 12 16:04:59 crc kubenswrapper[4693]: E1212 16:04:59.867850 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e086d96-bcae-47f7-b910-8682de2b1b11-plugin-serving-cert podName:6e086d96-bcae-47f7-b910-8682de2b1b11 nodeName:}" failed. No retries permitted until 2025-12-12 16:05:00.367834735 +0000 UTC m=+1127.536474336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/6e086d96-bcae-47f7-b910-8682de2b1b11-plugin-serving-cert") pod "nmstate-console-plugin-6ff7998486-ln6lg" (UID: "6e086d96-bcae-47f7-b910-8682de2b1b11") : secret "plugin-serving-cert" not found Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.868782 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6e086d96-bcae-47f7-b910-8682de2b1b11-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-ln6lg\" (UID: \"6e086d96-bcae-47f7-b910-8682de2b1b11\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.894906 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k86sf\" (UniqueName: \"kubernetes.io/projected/6e086d96-bcae-47f7-b910-8682de2b1b11-kube-api-access-k86sf\") pod \"nmstate-console-plugin-6ff7998486-ln6lg\" (UID: \"6e086d96-bcae-47f7-b910-8682de2b1b11\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.933965 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d7d4f8db7-5gfv7"] Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.934899 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:04:59 crc kubenswrapper[4693]: I1212 16:04:59.949816 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d7d4f8db7-5gfv7"] Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.071393 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-service-ca\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.071597 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-oauth-config\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.071727 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-oauth-serving-cert\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.071811 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-config\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.071847 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsmlg\" (UniqueName: \"kubernetes.io/projected/c0e7e551-57f3-4891-972f-4532f2fd50c4-kube-api-access-bsmlg\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.071899 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-trusted-ca-bundle\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.071951 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-serving-cert\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.173289 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/18a4d414-e872-4bb3-ae29-166fcc455a9a-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-4jrn4\" (UID: \"18a4d414-e872-4bb3-ae29-166fcc455a9a\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4" Dec 12 16:05:00 crc 
kubenswrapper[4693]: I1212 16:05:00.173380 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-oauth-config\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.173449 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-oauth-serving-cert\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.173486 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-config\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.173510 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsmlg\" (UniqueName: \"kubernetes.io/projected/c0e7e551-57f3-4891-972f-4532f2fd50c4-kube-api-access-bsmlg\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.173555 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-trusted-ca-bundle\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.173595 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-serving-cert\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.173660 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-service-ca\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.174466 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-config\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.174643 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-oauth-serving-cert\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: 
I1212 16:05:00.175194 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-service-ca\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.175908 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-trusted-ca-bundle\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.177661 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-serving-cert\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.177736 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/18a4d414-e872-4bb3-ae29-166fcc455a9a-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-4jrn4\" (UID: \"18a4d414-e872-4bb3-ae29-166fcc455a9a\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.179407 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-428hg"] Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.180301 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-oauth-config\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.197093 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsmlg\" (UniqueName: \"kubernetes.io/projected/c0e7e551-57f3-4891-972f-4532f2fd50c4-kube-api-access-bsmlg\") pod \"console-5d7d4f8db7-5gfv7\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.254206 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.377080 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e086d96-bcae-47f7-b910-8682de2b1b11-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-ln6lg\" (UID: \"6e086d96-bcae-47f7-b910-8682de2b1b11\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.381229 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e086d96-bcae-47f7-b910-8682de2b1b11-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-ln6lg\" (UID: \"6e086d96-bcae-47f7-b910-8682de2b1b11\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.402897 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.569512 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-z5g4l" event={"ID":"df1e4454-429c-4d2a-b372-b33ee0e88e6b","Type":"ContainerStarted","Data":"1de84b5030f47559f711edfe9919b5e65762a35fc654bce74116395411e15bcd"} Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.572187 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-428hg" event={"ID":"4b75d97e-a5e9-44e7-8589-1eeb2e620672","Type":"ContainerStarted","Data":"57c002fe2713b1aae22cf73197a414ee0d2a526b365d966a95820e6f8dcaa89c"} Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.636445 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4"] Dec 12 16:05:00 crc kubenswrapper[4693]: W1212 16:05:00.639536 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18a4d414_e872_4bb3_ae29_166fcc455a9a.slice/crio-47d9780cbb334c2da0b1d8d23a2c088bd064bbc7e05e68e2b8b5d8452e05dae6 WatchSource:0}: Error finding container 47d9780cbb334c2da0b1d8d23a2c088bd064bbc7e05e68e2b8b5d8452e05dae6: Status 404 returned error can't find the container with id 47d9780cbb334c2da0b1d8d23a2c088bd064bbc7e05e68e2b8b5d8452e05dae6 Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.658853 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg" Dec 12 16:05:00 crc kubenswrapper[4693]: I1212 16:05:00.698529 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d7d4f8db7-5gfv7"] Dec 12 16:05:01 crc kubenswrapper[4693]: I1212 16:05:01.060723 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg"] Dec 12 16:05:01 crc kubenswrapper[4693]: W1212 16:05:01.063639 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e086d96_bcae_47f7_b910_8682de2b1b11.slice/crio-c660a218312e868a79c14b32ea823900b377c21240373f0d641acd7534b46a28 WatchSource:0}: Error finding container c660a218312e868a79c14b32ea823900b377c21240373f0d641acd7534b46a28: Status 404 returned error can't find the container with id c660a218312e868a79c14b32ea823900b377c21240373f0d641acd7534b46a28 Dec 12 16:05:01 crc kubenswrapper[4693]: I1212 16:05:01.581576 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg" event={"ID":"6e086d96-bcae-47f7-b910-8682de2b1b11","Type":"ContainerStarted","Data":"c660a218312e868a79c14b32ea823900b377c21240373f0d641acd7534b46a28"} Dec 12 16:05:01 crc kubenswrapper[4693]: I1212 16:05:01.584005 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d7d4f8db7-5gfv7" event={"ID":"c0e7e551-57f3-4891-972f-4532f2fd50c4","Type":"ContainerStarted","Data":"dfa5f8533d6aa2d2d578cab261b1cac64452a8f7fd40ddffd23e1f945f613caa"} Dec 12 16:05:01 crc kubenswrapper[4693]: I1212 16:05:01.585650 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4" event={"ID":"18a4d414-e872-4bb3-ae29-166fcc455a9a","Type":"ContainerStarted","Data":"47d9780cbb334c2da0b1d8d23a2c088bd064bbc7e05e68e2b8b5d8452e05dae6"} Dec 12 16:05:02 crc kubenswrapper[4693]: I1212 16:05:02.596447 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d7d4f8db7-5gfv7" event={"ID":"c0e7e551-57f3-4891-972f-4532f2fd50c4","Type":"ContainerStarted","Data":"b3b2d924529d3f214244ec426a386be6f5a6d98e91c3ab05a3b25ed525025ffc"} Dec 12 16:05:02 crc kubenswrapper[4693]: I1212 16:05:02.625448 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d7d4f8db7-5gfv7" podStartSLOduration=3.62427414 podStartE2EDuration="3.62427414s" podCreationTimestamp="2025-12-12 16:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:05:02.614918969 +0000 UTC m=+1129.783558590" watchObservedRunningTime="2025-12-12 16:05:02.62427414 +0000 UTC m=+1129.792913731" Dec 12 16:05:08 crc kubenswrapper[4693]: I1212 16:05:08.828897 4693 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 16:05:09 crc kubenswrapper[4693]: I1212 16:05:09.664840 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-428hg" event={"ID":"4b75d97e-a5e9-44e7-8589-1eeb2e620672","Type":"ContainerStarted","Data":"d21fa42cf24f5532f48ec314765c59c80e1dfdcca16a15901f2372e2cc3eb83c"} Dec 12 16:05:09 crc kubenswrapper[4693]: I1212 16:05:09.670819 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg" 
event={"ID":"6e086d96-bcae-47f7-b910-8682de2b1b11","Type":"ContainerStarted","Data":"466c5532fdc143a5bd84a119c54df7cd59d3d3dce6b8d3f5269ababeeed09cfd"} Dec 12 16:05:09 crc kubenswrapper[4693]: I1212 16:05:09.675709 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-z5g4l" event={"ID":"df1e4454-429c-4d2a-b372-b33ee0e88e6b","Type":"ContainerStarted","Data":"8fef6fce2778dd4cd3be6ad7583ea114eafc23e2d21394e44ffe99a3bddaa154"} Dec 12 16:05:09 crc kubenswrapper[4693]: I1212 16:05:09.675860 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-z5g4l" Dec 12 16:05:09 crc kubenswrapper[4693]: I1212 16:05:09.680973 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4" event={"ID":"18a4d414-e872-4bb3-ae29-166fcc455a9a","Type":"ContainerStarted","Data":"9326c812d78e954d9609c5603189e19af5ee05aa233e9d4472df945fafe10f9c"} Dec 12 16:05:09 crc kubenswrapper[4693]: I1212 16:05:09.681140 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4" Dec 12 16:05:09 crc kubenswrapper[4693]: I1212 16:05:09.711789 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ln6lg" podStartSLOduration=3.338162225 podStartE2EDuration="10.711745898s" podCreationTimestamp="2025-12-12 16:04:59 +0000 UTC" firstStartedPulling="2025-12-12 16:05:01.06572306 +0000 UTC m=+1128.234362661" lastFinishedPulling="2025-12-12 16:05:08.439306723 +0000 UTC m=+1135.607946334" observedRunningTime="2025-12-12 16:05:09.688174745 +0000 UTC m=+1136.856814346" watchObservedRunningTime="2025-12-12 16:05:09.711745898 +0000 UTC m=+1136.880385519" Dec 12 16:05:09 crc kubenswrapper[4693]: I1212 16:05:09.729148 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4" podStartSLOduration=2.949251381 podStartE2EDuration="10.729128644s" podCreationTimestamp="2025-12-12 16:04:59 +0000 UTC" firstStartedPulling="2025-12-12 16:05:00.641751286 +0000 UTC m=+1127.810390897" lastFinishedPulling="2025-12-12 16:05:08.421628559 +0000 UTC m=+1135.590268160" observedRunningTime="2025-12-12 16:05:09.709312053 +0000 UTC m=+1136.877951664" watchObservedRunningTime="2025-12-12 16:05:09.729128644 +0000 UTC m=+1136.897768245" Dec 12 16:05:09 crc kubenswrapper[4693]: I1212 16:05:09.735077 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-z5g4l" podStartSLOduration=2.649544521 podStartE2EDuration="10.735059833s" podCreationTimestamp="2025-12-12 16:04:59 +0000 UTC" firstStartedPulling="2025-12-12 16:04:59.89898195 +0000 UTC m=+1127.067621551" lastFinishedPulling="2025-12-12 16:05:07.984497262 +0000 UTC m=+1135.153136863" observedRunningTime="2025-12-12 16:05:09.728183019 +0000 UTC m=+1136.896822620" watchObservedRunningTime="2025-12-12 16:05:09.735059833 +0000 UTC m=+1136.903699434" Dec 12 16:05:10 crc kubenswrapper[4693]: I1212 16:05:10.254804 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:10 crc kubenswrapper[4693]: I1212 16:05:10.255492 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:10 crc kubenswrapper[4693]: I1212 16:05:10.259569 4693 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:10 crc kubenswrapper[4693]: I1212 16:05:10.697031 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:05:10 crc kubenswrapper[4693]: I1212 16:05:10.770374 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7b8649454d-tq4z4"] Dec 12 16:05:13 crc kubenswrapper[4693]: I1212 16:05:13.715772 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-428hg" event={"ID":"4b75d97e-a5e9-44e7-8589-1eeb2e620672","Type":"ContainerStarted","Data":"629932195e05ec9548ea4d0e9cb210bced7861ac5d295bfe23e3bfe8a7f73f2b"} Dec 12 16:05:13 crc kubenswrapper[4693]: I1212 16:05:13.736906 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-428hg" podStartSLOduration=2.312710736 podStartE2EDuration="14.736880316s" podCreationTimestamp="2025-12-12 16:04:59 +0000 UTC" firstStartedPulling="2025-12-12 16:05:00.190800059 +0000 UTC m=+1127.359439660" lastFinishedPulling="2025-12-12 16:05:12.614969639 +0000 UTC m=+1139.783609240" observedRunningTime="2025-12-12 16:05:13.731001668 +0000 UTC m=+1140.899641269" watchObservedRunningTime="2025-12-12 16:05:13.736880316 +0000 UTC m=+1140.905519917" Dec 12 16:05:14 crc kubenswrapper[4693]: I1212 16:05:14.861612 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-z5g4l" Dec 12 16:05:20 crc kubenswrapper[4693]: I1212 16:05:20.411190 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4" Dec 12 16:05:35 crc kubenswrapper[4693]: I1212 16:05:35.814120 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7b8649454d-tq4z4" podUID="41ef7a5e-ff14-4a24-9aae-357f1f06e874" containerName="console" containerID="cri-o://73912ce5f6930e39ea46f4ebad8eca56eefd55a44d530d0eb7ed8b5623b1dae2" gracePeriod=15 Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.751491 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7b8649454d-tq4z4_41ef7a5e-ff14-4a24-9aae-357f1f06e874/console/0.log" Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.752244 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.919246 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7b8649454d-tq4z4_41ef7a5e-ff14-4a24-9aae-357f1f06e874/console/0.log" Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.919762 4693 generic.go:334] "Generic (PLEG): container finished" podID="41ef7a5e-ff14-4a24-9aae-357f1f06e874" containerID="73912ce5f6930e39ea46f4ebad8eca56eefd55a44d530d0eb7ed8b5623b1dae2" exitCode=2 Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.919805 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b8649454d-tq4z4" event={"ID":"41ef7a5e-ff14-4a24-9aae-357f1f06e874","Type":"ContainerDied","Data":"73912ce5f6930e39ea46f4ebad8eca56eefd55a44d530d0eb7ed8b5623b1dae2"} Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.919826 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7b8649454d-tq4z4" Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.919840 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b8649454d-tq4z4" event={"ID":"41ef7a5e-ff14-4a24-9aae-357f1f06e874","Type":"ContainerDied","Data":"5a1b0cb3bc5eb1412791fe1e103afaaa64d0928e345b3b1f8d886b8652172bf1"} Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.919863 4693 scope.go:117] "RemoveContainer" containerID="73912ce5f6930e39ea46f4ebad8eca56eefd55a44d530d0eb7ed8b5623b1dae2" Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.923950 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tkls\" (UniqueName: \"kubernetes.io/projected/41ef7a5e-ff14-4a24-9aae-357f1f06e874-kube-api-access-7tkls\") pod \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.925838 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-trusted-ca-bundle\") pod \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.925968 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-oauth-serving-cert\") pod \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.926874 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "41ef7a5e-ff14-4a24-9aae-357f1f06e874" (UID: "41ef7a5e-ff14-4a24-9aae-357f1f06e874"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.926943 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "41ef7a5e-ff14-4a24-9aae-357f1f06e874" (UID: "41ef7a5e-ff14-4a24-9aae-357f1f06e874"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.927525 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-oauth-config\") pod \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.927642 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-service-ca\") pod \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.927723 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-config\") pod \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.927808 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-serving-cert\") pod \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\" (UID: \"41ef7a5e-ff14-4a24-9aae-357f1f06e874\") " Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.928086 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-config" (OuterVolumeSpecName: "console-config") pod "41ef7a5e-ff14-4a24-9aae-357f1f06e874" (UID: "41ef7a5e-ff14-4a24-9aae-357f1f06e874"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.928227 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-service-ca" (OuterVolumeSpecName: "service-ca") pod "41ef7a5e-ff14-4a24-9aae-357f1f06e874" (UID: "41ef7a5e-ff14-4a24-9aae-357f1f06e874"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.928641 4693 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.928731 4693 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.928813 4693 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.928900 4693 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41ef7a5e-ff14-4a24-9aae-357f1f06e874-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.933596 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ef7a5e-ff14-4a24-9aae-357f1f06e874-kube-api-access-7tkls" (OuterVolumeSpecName: "kube-api-access-7tkls") pod "41ef7a5e-ff14-4a24-9aae-357f1f06e874" (UID: "41ef7a5e-ff14-4a24-9aae-357f1f06e874"). InnerVolumeSpecName "kube-api-access-7tkls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.933859 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "41ef7a5e-ff14-4a24-9aae-357f1f06e874" (UID: "41ef7a5e-ff14-4a24-9aae-357f1f06e874"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:05:36 crc kubenswrapper[4693]: I1212 16:05:36.933887 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "41ef7a5e-ff14-4a24-9aae-357f1f06e874" (UID: "41ef7a5e-ff14-4a24-9aae-357f1f06e874"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:05:37 crc kubenswrapper[4693]: I1212 16:05:37.003545 4693 scope.go:117] "RemoveContainer" containerID="73912ce5f6930e39ea46f4ebad8eca56eefd55a44d530d0eb7ed8b5623b1dae2" Dec 12 16:05:37 crc kubenswrapper[4693]: E1212 16:05:37.004733 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73912ce5f6930e39ea46f4ebad8eca56eefd55a44d530d0eb7ed8b5623b1dae2\": container with ID starting with 73912ce5f6930e39ea46f4ebad8eca56eefd55a44d530d0eb7ed8b5623b1dae2 not found: ID does not exist" containerID="73912ce5f6930e39ea46f4ebad8eca56eefd55a44d530d0eb7ed8b5623b1dae2" Dec 12 16:05:37 crc kubenswrapper[4693]: I1212 16:05:37.004767 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73912ce5f6930e39ea46f4ebad8eca56eefd55a44d530d0eb7ed8b5623b1dae2"} err="failed to get container status \"73912ce5f6930e39ea46f4ebad8eca56eefd55a44d530d0eb7ed8b5623b1dae2\": rpc error: code = NotFound desc = could not find container \"73912ce5f6930e39ea46f4ebad8eca56eefd55a44d530d0eb7ed8b5623b1dae2\": container with ID starting with 73912ce5f6930e39ea46f4ebad8eca56eefd55a44d530d0eb7ed8b5623b1dae2 not found: ID does not exist" Dec 12 16:05:37 crc kubenswrapper[4693]: I1212 16:05:37.029832 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tkls\" (UniqueName: \"kubernetes.io/projected/41ef7a5e-ff14-4a24-9aae-357f1f06e874-kube-api-access-7tkls\") on node \"crc\" DevicePath \"\"" Dec 12 16:05:37 crc kubenswrapper[4693]: I1212 16:05:37.030056 4693 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:05:37 crc kubenswrapper[4693]: I1212 16:05:37.030117 4693 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41ef7a5e-ff14-4a24-9aae-357f1f06e874-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:05:37 crc kubenswrapper[4693]: I1212 16:05:37.252326 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7b8649454d-tq4z4"] Dec 12 16:05:37 crc kubenswrapper[4693]: I1212 16:05:37.261836 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7b8649454d-tq4z4"] Dec 12 16:05:37 crc kubenswrapper[4693]: I1212 16:05:37.367297 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ef7a5e-ff14-4a24-9aae-357f1f06e874" path="/var/lib/kubelet/pods/41ef7a5e-ff14-4a24-9aae-357f1f06e874/volumes" Dec 12 16:05:40 crc kubenswrapper[4693]: I1212 16:05:40.916983 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd"] Dec 12 16:05:40 crc kubenswrapper[4693]: E1212 16:05:40.917858 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ef7a5e-ff14-4a24-9aae-357f1f06e874" containerName="console" Dec 12 16:05:40 crc kubenswrapper[4693]: I1212 16:05:40.917874 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ef7a5e-ff14-4a24-9aae-357f1f06e874" containerName="console" Dec 12 16:05:40 crc kubenswrapper[4693]: I1212 16:05:40.918044 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="41ef7a5e-ff14-4a24-9aae-357f1f06e874" containerName="console" Dec 12 16:05:40 crc kubenswrapper[4693]: I1212 
16:05:40.919294 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" Dec 12 16:05:40 crc kubenswrapper[4693]: I1212 16:05:40.923147 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 12 16:05:40 crc kubenswrapper[4693]: I1212 16:05:40.943463 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd"] Dec 12 16:05:40 crc kubenswrapper[4693]: I1212 16:05:40.997114 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8462\" (UniqueName: \"kubernetes.io/projected/40d42c88-5e30-40f3-81e3-a7bb55f21523-kube-api-access-z8462\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd\" (UID: \"40d42c88-5e30-40f3-81e3-a7bb55f21523\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" Dec 12 16:05:40 crc kubenswrapper[4693]: I1212 16:05:40.997168 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40d42c88-5e30-40f3-81e3-a7bb55f21523-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd\" (UID: \"40d42c88-5e30-40f3-81e3-a7bb55f21523\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" Dec 12 16:05:40 crc kubenswrapper[4693]: I1212 16:05:40.997195 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40d42c88-5e30-40f3-81e3-a7bb55f21523-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd\" (UID: \"40d42c88-5e30-40f3-81e3-a7bb55f21523\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" Dec 12 16:05:41 crc kubenswrapper[4693]: I1212 16:05:41.097953 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8462\" (UniqueName: \"kubernetes.io/projected/40d42c88-5e30-40f3-81e3-a7bb55f21523-kube-api-access-z8462\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd\" (UID: \"40d42c88-5e30-40f3-81e3-a7bb55f21523\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" Dec 12 16:05:41 crc kubenswrapper[4693]: I1212 16:05:41.098008 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40d42c88-5e30-40f3-81e3-a7bb55f21523-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd\" (UID: \"40d42c88-5e30-40f3-81e3-a7bb55f21523\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" Dec 12 16:05:41 crc kubenswrapper[4693]: I1212 16:05:41.098032 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40d42c88-5e30-40f3-81e3-a7bb55f21523-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd\" (UID: \"40d42c88-5e30-40f3-81e3-a7bb55f21523\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" Dec 12 16:05:41 crc kubenswrapper[4693]: I1212 16:05:41.098661 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40d42c88-5e30-40f3-81e3-a7bb55f21523-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd\" (UID: \"40d42c88-5e30-40f3-81e3-a7bb55f21523\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" Dec 12 16:05:41 crc kubenswrapper[4693]: I1212 16:05:41.099140 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40d42c88-5e30-40f3-81e3-a7bb55f21523-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd\" (UID: \"40d42c88-5e30-40f3-81e3-a7bb55f21523\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" Dec 12 16:05:41 crc kubenswrapper[4693]: I1212 16:05:41.126236 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8462\" (UniqueName: \"kubernetes.io/projected/40d42c88-5e30-40f3-81e3-a7bb55f21523-kube-api-access-z8462\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd\" (UID: \"40d42c88-5e30-40f3-81e3-a7bb55f21523\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" Dec 12 16:05:41 crc kubenswrapper[4693]: I1212 16:05:41.242550 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" Dec 12 16:05:41 crc kubenswrapper[4693]: I1212 16:05:41.719192 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd"] Dec 12 16:05:41 crc kubenswrapper[4693]: W1212 16:05:41.724684 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40d42c88_5e30_40f3_81e3_a7bb55f21523.slice/crio-42c896ccb7314aec728df1a4f1d859f685f8b6ea1410030dba77bc3085186b33 WatchSource:0}: Error finding container 42c896ccb7314aec728df1a4f1d859f685f8b6ea1410030dba77bc3085186b33: Status 404 returned error can't find the container with id 42c896ccb7314aec728df1a4f1d859f685f8b6ea1410030dba77bc3085186b33 Dec 12 16:05:41 crc kubenswrapper[4693]: I1212 16:05:41.969659 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" event={"ID":"40d42c88-5e30-40f3-81e3-a7bb55f21523","Type":"ContainerStarted","Data":"42c896ccb7314aec728df1a4f1d859f685f8b6ea1410030dba77bc3085186b33"} Dec 12 16:05:42 crc kubenswrapper[4693]: I1212 16:05:42.530062 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:05:42 crc kubenswrapper[4693]: I1212 16:05:42.530330 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:05:42 crc kubenswrapper[4693]: I1212 16:05:42.977192 4693 generic.go:334] "Generic (PLEG): container finished" podID="40d42c88-5e30-40f3-81e3-a7bb55f21523" 
containerID="5e45ed295815a0d5bc282cbd6606fa906d73cd79f0cea2676f34c3b5f9dda690" exitCode=0 Dec 12 16:05:42 crc kubenswrapper[4693]: I1212 16:05:42.977262 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" event={"ID":"40d42c88-5e30-40f3-81e3-a7bb55f21523","Type":"ContainerDied","Data":"5e45ed295815a0d5bc282cbd6606fa906d73cd79f0cea2676f34c3b5f9dda690"} Dec 12 16:05:44 crc kubenswrapper[4693]: I1212 16:05:44.992607 4693 generic.go:334] "Generic (PLEG): container finished" podID="40d42c88-5e30-40f3-81e3-a7bb55f21523" containerID="68a85d31080c52b64c3687f4ddfe149bcc5b38dcc0ba0a2159a7c0bec1433e45" exitCode=0 Dec 12 16:05:44 crc kubenswrapper[4693]: I1212 16:05:44.992699 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" event={"ID":"40d42c88-5e30-40f3-81e3-a7bb55f21523","Type":"ContainerDied","Data":"68a85d31080c52b64c3687f4ddfe149bcc5b38dcc0ba0a2159a7c0bec1433e45"} Dec 12 16:05:46 crc kubenswrapper[4693]: I1212 16:05:46.001067 4693 generic.go:334] "Generic (PLEG): container finished" podID="40d42c88-5e30-40f3-81e3-a7bb55f21523" containerID="4163ac5d06deef4f4768f445b5ed06a71d7825b976083a3bc4175138e8e41dc6" exitCode=0 Dec 12 16:05:46 crc kubenswrapper[4693]: I1212 16:05:46.001245 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" event={"ID":"40d42c88-5e30-40f3-81e3-a7bb55f21523","Type":"ContainerDied","Data":"4163ac5d06deef4f4768f445b5ed06a71d7825b976083a3bc4175138e8e41dc6"} Dec 12 16:05:47 crc kubenswrapper[4693]: I1212 16:05:47.251136 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" Dec 12 16:05:47 crc kubenswrapper[4693]: I1212 16:05:47.401153 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8462\" (UniqueName: \"kubernetes.io/projected/40d42c88-5e30-40f3-81e3-a7bb55f21523-kube-api-access-z8462\") pod \"40d42c88-5e30-40f3-81e3-a7bb55f21523\" (UID: \"40d42c88-5e30-40f3-81e3-a7bb55f21523\") " Dec 12 16:05:47 crc kubenswrapper[4693]: I1212 16:05:47.401571 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40d42c88-5e30-40f3-81e3-a7bb55f21523-util\") pod \"40d42c88-5e30-40f3-81e3-a7bb55f21523\" (UID: \"40d42c88-5e30-40f3-81e3-a7bb55f21523\") " Dec 12 16:05:47 crc kubenswrapper[4693]: I1212 16:05:47.401613 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40d42c88-5e30-40f3-81e3-a7bb55f21523-bundle\") pod \"40d42c88-5e30-40f3-81e3-a7bb55f21523\" (UID: \"40d42c88-5e30-40f3-81e3-a7bb55f21523\") " Dec 12 16:05:47 crc kubenswrapper[4693]: I1212 16:05:47.403855 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40d42c88-5e30-40f3-81e3-a7bb55f21523-bundle" (OuterVolumeSpecName: "bundle") pod "40d42c88-5e30-40f3-81e3-a7bb55f21523" (UID: "40d42c88-5e30-40f3-81e3-a7bb55f21523"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:05:47 crc kubenswrapper[4693]: I1212 16:05:47.412909 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d42c88-5e30-40f3-81e3-a7bb55f21523-kube-api-access-z8462" (OuterVolumeSpecName: "kube-api-access-z8462") pod "40d42c88-5e30-40f3-81e3-a7bb55f21523" (UID: "40d42c88-5e30-40f3-81e3-a7bb55f21523"). InnerVolumeSpecName "kube-api-access-z8462". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:05:47 crc kubenswrapper[4693]: I1212 16:05:47.414806 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40d42c88-5e30-40f3-81e3-a7bb55f21523-util" (OuterVolumeSpecName: "util") pod "40d42c88-5e30-40f3-81e3-a7bb55f21523" (UID: "40d42c88-5e30-40f3-81e3-a7bb55f21523"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:05:47 crc kubenswrapper[4693]: I1212 16:05:47.502869 4693 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40d42c88-5e30-40f3-81e3-a7bb55f21523-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:05:47 crc kubenswrapper[4693]: I1212 16:05:47.502905 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8462\" (UniqueName: \"kubernetes.io/projected/40d42c88-5e30-40f3-81e3-a7bb55f21523-kube-api-access-z8462\") on node \"crc\" DevicePath \"\"" Dec 12 16:05:47 crc kubenswrapper[4693]: I1212 16:05:47.502914 4693 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40d42c88-5e30-40f3-81e3-a7bb55f21523-util\") on node \"crc\" DevicePath \"\"" Dec 12 16:05:48 crc kubenswrapper[4693]: I1212 16:05:48.017892 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" event={"ID":"40d42c88-5e30-40f3-81e3-a7bb55f21523","Type":"ContainerDied","Data":"42c896ccb7314aec728df1a4f1d859f685f8b6ea1410030dba77bc3085186b33"} Dec 12 16:05:48 crc kubenswrapper[4693]: I1212 16:05:48.017948 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42c896ccb7314aec728df1a4f1d859f685f8b6ea1410030dba77bc3085186b33" Dec 12 16:05:48 crc kubenswrapper[4693]: I1212 16:05:48.017977 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jt8wd" Dec 12 16:05:55 crc kubenswrapper[4693]: I1212 16:05:55.956990 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl"] Dec 12 16:05:55 crc kubenswrapper[4693]: E1212 16:05:55.957990 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d42c88-5e30-40f3-81e3-a7bb55f21523" containerName="pull" Dec 12 16:05:55 crc kubenswrapper[4693]: I1212 16:05:55.958010 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d42c88-5e30-40f3-81e3-a7bb55f21523" containerName="pull" Dec 12 16:05:55 crc kubenswrapper[4693]: E1212 16:05:55.958034 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d42c88-5e30-40f3-81e3-a7bb55f21523" containerName="util" Dec 12 16:05:55 crc kubenswrapper[4693]: I1212 16:05:55.958042 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d42c88-5e30-40f3-81e3-a7bb55f21523" containerName="util" Dec 12 16:05:55 crc kubenswrapper[4693]: E1212 16:05:55.958059 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d42c88-5e30-40f3-81e3-a7bb55f21523" containerName="extract" Dec 12 16:05:55 crc kubenswrapper[4693]: I1212 16:05:55.958069 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d42c88-5e30-40f3-81e3-a7bb55f21523" containerName="extract" Dec 12 16:05:55 crc kubenswrapper[4693]: I1212 16:05:55.960257 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="40d42c88-5e30-40f3-81e3-a7bb55f21523" containerName="extract" Dec 12 16:05:55 crc kubenswrapper[4693]: I1212 16:05:55.962365 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" Dec 12 16:05:55 crc kubenswrapper[4693]: I1212 16:05:55.971124 4693 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-k8tb8" Dec 12 16:05:55 crc kubenswrapper[4693]: I1212 16:05:55.971383 4693 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 12 16:05:55 crc kubenswrapper[4693]: I1212 16:05:55.971133 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 12 16:05:55 crc kubenswrapper[4693]: I1212 16:05:55.971784 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 12 16:05:55 crc kubenswrapper[4693]: I1212 16:05:55.972054 4693 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.000067 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl"] Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.051026 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/955c500c-bfaa-463d-b207-fcf0bd9bd9f2-webhook-cert\") pod \"metallb-operator-controller-manager-7744f9564f-bwttl\" (UID: \"955c500c-bfaa-463d-b207-fcf0bd9bd9f2\") " pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.051080 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/955c500c-bfaa-463d-b207-fcf0bd9bd9f2-apiservice-cert\") pod \"metallb-operator-controller-manager-7744f9564f-bwttl\" (UID: \"955c500c-bfaa-463d-b207-fcf0bd9bd9f2\") " pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.051108 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvlbc\" (UniqueName: \"kubernetes.io/projected/955c500c-bfaa-463d-b207-fcf0bd9bd9f2-kube-api-access-lvlbc\") pod \"metallb-operator-controller-manager-7744f9564f-bwttl\" (UID: \"955c500c-bfaa-463d-b207-fcf0bd9bd9f2\") " pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.152378 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/955c500c-bfaa-463d-b207-fcf0bd9bd9f2-webhook-cert\") pod \"metallb-operator-controller-manager-7744f9564f-bwttl\" (UID: \"955c500c-bfaa-463d-b207-fcf0bd9bd9f2\") " pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.152443 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/955c500c-bfaa-463d-b207-fcf0bd9bd9f2-apiservice-cert\") pod \"metallb-operator-controller-manager-7744f9564f-bwttl\" (UID: \"955c500c-bfaa-463d-b207-fcf0bd9bd9f2\") " pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.152482 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvlbc\" (UniqueName: \"kubernetes.io/projected/955c500c-bfaa-463d-b207-fcf0bd9bd9f2-kube-api-access-lvlbc\") pod \"metallb-operator-controller-manager-7744f9564f-bwttl\" (UID: \"955c500c-bfaa-463d-b207-fcf0bd9bd9f2\") " pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.160630 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/955c500c-bfaa-463d-b207-fcf0bd9bd9f2-webhook-cert\") pod \"metallb-operator-controller-manager-7744f9564f-bwttl\" (UID: \"955c500c-bfaa-463d-b207-fcf0bd9bd9f2\") " pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.181801 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/955c500c-bfaa-463d-b207-fcf0bd9bd9f2-apiservice-cert\") pod \"metallb-operator-controller-manager-7744f9564f-bwttl\" (UID: \"955c500c-bfaa-463d-b207-fcf0bd9bd9f2\") " pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.189915 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvlbc\" (UniqueName: \"kubernetes.io/projected/955c500c-bfaa-463d-b207-fcf0bd9bd9f2-kube-api-access-lvlbc\") pod \"metallb-operator-controller-manager-7744f9564f-bwttl\" (UID: \"955c500c-bfaa-463d-b207-fcf0bd9bd9f2\") " pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.327857 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.461558 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj"] Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.463256 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.465960 4693 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.466216 4693 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-8bjjk" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.468071 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/26cbeab5-89ba-425a-87c0-8797ceb6957a-apiservice-cert\") pod \"metallb-operator-webhook-server-d677b6fd-mw5qj\" (UID: \"26cbeab5-89ba-425a-87c0-8797ceb6957a\") " pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.468133 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26cbeab5-89ba-425a-87c0-8797ceb6957a-webhook-cert\") pod \"metallb-operator-webhook-server-d677b6fd-mw5qj\" (UID: \"26cbeab5-89ba-425a-87c0-8797ceb6957a\") " pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.468183 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5d9d\" (UniqueName: \"kubernetes.io/projected/26cbeab5-89ba-425a-87c0-8797ceb6957a-kube-api-access-v5d9d\") pod \"metallb-operator-webhook-server-d677b6fd-mw5qj\" (UID: \"26cbeab5-89ba-425a-87c0-8797ceb6957a\") " pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.470759 4693 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.503018 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj"] Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.570255 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/26cbeab5-89ba-425a-87c0-8797ceb6957a-apiservice-cert\") pod \"metallb-operator-webhook-server-d677b6fd-mw5qj\" (UID: \"26cbeab5-89ba-425a-87c0-8797ceb6957a\") " pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.570799 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26cbeab5-89ba-425a-87c0-8797ceb6957a-webhook-cert\") pod \"metallb-operator-webhook-server-d677b6fd-mw5qj\" (UID: \"26cbeab5-89ba-425a-87c0-8797ceb6957a\") " pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.570829 4693 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-v5d9d\" (UniqueName: \"kubernetes.io/projected/26cbeab5-89ba-425a-87c0-8797ceb6957a-kube-api-access-v5d9d\") pod \"metallb-operator-webhook-server-d677b6fd-mw5qj\" (UID: \"26cbeab5-89ba-425a-87c0-8797ceb6957a\") " pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.579310 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26cbeab5-89ba-425a-87c0-8797ceb6957a-webhook-cert\") pod \"metallb-operator-webhook-server-d677b6fd-mw5qj\" (UID: \"26cbeab5-89ba-425a-87c0-8797ceb6957a\") " pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.581105 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/26cbeab5-89ba-425a-87c0-8797ceb6957a-apiservice-cert\") pod \"metallb-operator-webhook-server-d677b6fd-mw5qj\" (UID: \"26cbeab5-89ba-425a-87c0-8797ceb6957a\") " pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.595405 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5d9d\" (UniqueName: \"kubernetes.io/projected/26cbeab5-89ba-425a-87c0-8797ceb6957a-kube-api-access-v5d9d\") pod \"metallb-operator-webhook-server-d677b6fd-mw5qj\" (UID: \"26cbeab5-89ba-425a-87c0-8797ceb6957a\") " pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.832910 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 16:05:56 crc kubenswrapper[4693]: I1212 16:05:56.949564 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl"] Dec 12 16:05:57 crc kubenswrapper[4693]: I1212 16:05:57.090496 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" event={"ID":"955c500c-bfaa-463d-b207-fcf0bd9bd9f2","Type":"ContainerStarted","Data":"ac3cb9e827641f9ee9a1d77fd87430b4de462adc82ecbcc810eb7503c57e4bcb"} Dec 12 16:05:57 crc kubenswrapper[4693]: I1212 16:05:57.324861 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj"] Dec 12 16:05:58 crc kubenswrapper[4693]: I1212 16:05:58.097946 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" event={"ID":"26cbeab5-89ba-425a-87c0-8797ceb6957a","Type":"ContainerStarted","Data":"85f7e0618a94980de8fcca6f426bccb5ff2204479ddccf6b33b4f7d81c9547d2"} Dec 12 16:06:02 crc kubenswrapper[4693]: I1212 16:06:02.136204 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" event={"ID":"26cbeab5-89ba-425a-87c0-8797ceb6957a","Type":"ContainerStarted","Data":"970ce2fc0dbc6fcc4e0fa4562e5b5d2591a444a6407defefa1819ef84f3286b7"} Dec 12 16:06:02 crc kubenswrapper[4693]: I1212 16:06:02.137194 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 16:06:02 crc kubenswrapper[4693]: I1212 16:06:02.137890 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" event={"ID":"955c500c-bfaa-463d-b207-fcf0bd9bd9f2","Type":"ContainerStarted","Data":"fff1c22407f0b3d131b2d6713159653190cbfe07a582c046558bce4b498545d8"} Dec 12 16:06:02 crc kubenswrapper[4693]: I1212 16:06:02.138145 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" Dec 12 16:06:02 crc kubenswrapper[4693]: I1212 16:06:02.163290 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" podStartSLOduration=1.963211561 podStartE2EDuration="6.163240047s" podCreationTimestamp="2025-12-12 16:05:56 +0000 UTC" firstStartedPulling="2025-12-12 16:05:57.340498301 +0000 UTC m=+1184.509137892" lastFinishedPulling="2025-12-12 16:06:01.540526777 +0000 UTC m=+1188.709166378" observedRunningTime="2025-12-12 16:06:02.160043071 +0000 UTC m=+1189.328682692" watchObservedRunningTime="2025-12-12 16:06:02.163240047 +0000 UTC m=+1189.331879658" Dec 12 16:06:02 crc kubenswrapper[4693]: I1212 16:06:02.193794 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" podStartSLOduration=2.634589932 podStartE2EDuration="7.193771117s" podCreationTimestamp="2025-12-12 16:05:55 +0000 UTC" firstStartedPulling="2025-12-12 16:05:56.969525855 +0000 UTC m=+1184.138165456" lastFinishedPulling="2025-12-12 16:06:01.52870704 +0000 UTC m=+1188.697346641" observedRunningTime="2025-12-12 16:06:02.193744087 +0000 UTC m=+1189.362383708" watchObservedRunningTime="2025-12-12 16:06:02.193771117 +0000 UTC m=+1189.362410738" Dec 12 16:06:12 crc kubenswrapper[4693]: I1212 16:06:12.530821 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:06:12 crc kubenswrapper[4693]: I1212 16:06:12.531459 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:06:16 crc kubenswrapper[4693]: I1212 16:06:16.845051 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 16:06:36 crc kubenswrapper[4693]: I1212 16:06:36.330920 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.142510 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-7qqwq"] Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.146047 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.150029 4693 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vvtgn" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.151440 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.151651 4693 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.155391 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn"] Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.156573 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.158207 4693 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.170515 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn"] Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.190015 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f24e6c5f-a222-44d6-8c2a-75b0d066e218-frr-conf\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.190100 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f24e6c5f-a222-44d6-8c2a-75b0d066e218-frr-startup\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.190141 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f24e6c5f-a222-44d6-8c2a-75b0d066e218-metrics\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.190177 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f24e6c5f-a222-44d6-8c2a-75b0d066e218-metrics-certs\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.190227 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs2gj\" (UniqueName: \"kubernetes.io/projected/f24e6c5f-a222-44d6-8c2a-75b0d066e218-kube-api-access-xs2gj\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.190252 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f24e6c5f-a222-44d6-8c2a-75b0d066e218-frr-sockets\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " 
pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.190287 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f24e6c5f-a222-44d6-8c2a-75b0d066e218-reloader\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.290196 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-ndkbb"] Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.291533 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f24e6c5f-a222-44d6-8c2a-75b0d066e218-frr-conf\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.291590 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkvmh\" (UniqueName: \"kubernetes.io/projected/5a6d7731-5c1e-4b9b-b847-6deabf3f6af9-kube-api-access-lkvmh\") pod \"frr-k8s-webhook-server-7784b6fcf-mqxdn\" (UID: \"5a6d7731-5c1e-4b9b-b847-6deabf3f6af9\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.291636 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f24e6c5f-a222-44d6-8c2a-75b0d066e218-frr-startup\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.291669 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f24e6c5f-a222-44d6-8c2a-75b0d066e218-metrics\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.291695 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a6d7731-5c1e-4b9b-b847-6deabf3f6af9-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-mqxdn\" (UID: \"5a6d7731-5c1e-4b9b-b847-6deabf3f6af9\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.291715 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f24e6c5f-a222-44d6-8c2a-75b0d066e218-metrics-certs\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.291749 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs2gj\" (UniqueName: \"kubernetes.io/projected/f24e6c5f-a222-44d6-8c2a-75b0d066e218-kube-api-access-xs2gj\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.291768 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f24e6c5f-a222-44d6-8c2a-75b0d066e218-frr-sockets\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " 
pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.291782 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f24e6c5f-a222-44d6-8c2a-75b0d066e218-reloader\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.292505 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f24e6c5f-a222-44d6-8c2a-75b0d066e218-frr-conf\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.292526 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f24e6c5f-a222-44d6-8c2a-75b0d066e218-reloader\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: E1212 16:06:37.292601 4693 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Dec 12 16:06:37 crc kubenswrapper[4693]: E1212 16:06:37.292656 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f24e6c5f-a222-44d6-8c2a-75b0d066e218-metrics-certs podName:f24e6c5f-a222-44d6-8c2a-75b0d066e218 nodeName:}" failed. No retries permitted until 2025-12-12 16:06:37.792636392 +0000 UTC m=+1224.961275993 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f24e6c5f-a222-44d6-8c2a-75b0d066e218-metrics-certs") pod "frr-k8s-7qqwq" (UID: "f24e6c5f-a222-44d6-8c2a-75b0d066e218") : secret "frr-k8s-certs-secret" not found Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.292900 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f24e6c5f-a222-44d6-8c2a-75b0d066e218-frr-sockets\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.293017 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f24e6c5f-a222-44d6-8c2a-75b0d066e218-metrics\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.293119 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f24e6c5f-a222-44d6-8c2a-75b0d066e218-frr-startup\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.298859 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-ndkbb" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.312711 4693 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-wlzmx" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.313552 4693 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.313548 4693 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.314084 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.314256 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-jdmtk"] Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.316014 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-jdmtk" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.322848 4693 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.326902 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs2gj\" (UniqueName: \"kubernetes.io/projected/f24e6c5f-a222-44d6-8c2a-75b0d066e218-kube-api-access-xs2gj\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.348749 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-jdmtk"] Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.393385 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a6d7731-5c1e-4b9b-b847-6deabf3f6af9-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-mqxdn\" (UID: \"5a6d7731-5c1e-4b9b-b847-6deabf3f6af9\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.393603 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a0fbfcb7-b516-452f-be80-ddd275ed0987-memberlist\") pod \"speaker-ndkbb\" (UID: \"a0fbfcb7-b516-452f-be80-ddd275ed0987\") " pod="metallb-system/speaker-ndkbb" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.393626 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1424cfc-ad10-45e2-b69f-f313e29e5b58-metrics-certs\") pod \"controller-5bddd4b946-jdmtk\" (UID: \"a1424cfc-ad10-45e2-b69f-f313e29e5b58\") " pod="metallb-system/controller-5bddd4b946-jdmtk" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.393648 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shdff\" (UniqueName: \"kubernetes.io/projected/a1424cfc-ad10-45e2-b69f-f313e29e5b58-kube-api-access-shdff\") pod \"controller-5bddd4b946-jdmtk\" (UID: \"a1424cfc-ad10-45e2-b69f-f313e29e5b58\") " pod="metallb-system/controller-5bddd4b946-jdmtk" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.393670 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-hqmlg\" (UniqueName: \"kubernetes.io/projected/a0fbfcb7-b516-452f-be80-ddd275ed0987-kube-api-access-hqmlg\") pod \"speaker-ndkbb\" (UID: \"a0fbfcb7-b516-452f-be80-ddd275ed0987\") " pod="metallb-system/speaker-ndkbb" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.393729 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0fbfcb7-b516-452f-be80-ddd275ed0987-metrics-certs\") pod \"speaker-ndkbb\" (UID: \"a0fbfcb7-b516-452f-be80-ddd275ed0987\") " pod="metallb-system/speaker-ndkbb" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.393748 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1424cfc-ad10-45e2-b69f-f313e29e5b58-cert\") pod \"controller-5bddd4b946-jdmtk\" (UID: \"a1424cfc-ad10-45e2-b69f-f313e29e5b58\") " pod="metallb-system/controller-5bddd4b946-jdmtk" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.393765 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkvmh\" (UniqueName: \"kubernetes.io/projected/5a6d7731-5c1e-4b9b-b847-6deabf3f6af9-kube-api-access-lkvmh\") pod \"frr-k8s-webhook-server-7784b6fcf-mqxdn\" (UID: \"5a6d7731-5c1e-4b9b-b847-6deabf3f6af9\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.393783 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a0fbfcb7-b516-452f-be80-ddd275ed0987-metallb-excludel2\") pod \"speaker-ndkbb\" (UID: \"a0fbfcb7-b516-452f-be80-ddd275ed0987\") " pod="metallb-system/speaker-ndkbb" Dec 12 16:06:37 crc kubenswrapper[4693]: E1212 16:06:37.393546 4693 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 12 16:06:37 crc kubenswrapper[4693]: E1212 16:06:37.393898 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a6d7731-5c1e-4b9b-b847-6deabf3f6af9-cert podName:5a6d7731-5c1e-4b9b-b847-6deabf3f6af9 nodeName:}" failed. No retries permitted until 2025-12-12 16:06:37.893882592 +0000 UTC m=+1225.062522193 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a6d7731-5c1e-4b9b-b847-6deabf3f6af9-cert") pod "frr-k8s-webhook-server-7784b6fcf-mqxdn" (UID: "5a6d7731-5c1e-4b9b-b847-6deabf3f6af9") : secret "frr-k8s-webhook-server-cert" not found Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.417032 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkvmh\" (UniqueName: \"kubernetes.io/projected/5a6d7731-5c1e-4b9b-b847-6deabf3f6af9-kube-api-access-lkvmh\") pod \"frr-k8s-webhook-server-7784b6fcf-mqxdn\" (UID: \"5a6d7731-5c1e-4b9b-b847-6deabf3f6af9\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.495087 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0fbfcb7-b516-452f-be80-ddd275ed0987-metrics-certs\") pod \"speaker-ndkbb\" (UID: \"a0fbfcb7-b516-452f-be80-ddd275ed0987\") " pod="metallb-system/speaker-ndkbb" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.495165 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1424cfc-ad10-45e2-b69f-f313e29e5b58-cert\") pod \"controller-5bddd4b946-jdmtk\" (UID: \"a1424cfc-ad10-45e2-b69f-f313e29e5b58\") " pod="metallb-system/controller-5bddd4b946-jdmtk" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.495199 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a0fbfcb7-b516-452f-be80-ddd275ed0987-metallb-excludel2\") pod \"speaker-ndkbb\" (UID: \"a0fbfcb7-b516-452f-be80-ddd275ed0987\") " pod="metallb-system/speaker-ndkbb" Dec 12 16:06:37 crc kubenswrapper[4693]: E1212 16:06:37.495253 4693 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.495324 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a0fbfcb7-b516-452f-be80-ddd275ed0987-memberlist\") pod \"speaker-ndkbb\" (UID: \"a0fbfcb7-b516-452f-be80-ddd275ed0987\") " pod="metallb-system/speaker-ndkbb" Dec 12 16:06:37 crc kubenswrapper[4693]: E1212 16:06:37.495387 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0fbfcb7-b516-452f-be80-ddd275ed0987-metrics-certs podName:a0fbfcb7-b516-452f-be80-ddd275ed0987 nodeName:}" failed. No retries permitted until 2025-12-12 16:06:37.995366939 +0000 UTC m=+1225.164006540 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a0fbfcb7-b516-452f-be80-ddd275ed0987-metrics-certs") pod "speaker-ndkbb" (UID: "a0fbfcb7-b516-452f-be80-ddd275ed0987") : secret "speaker-certs-secret" not found Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.495747 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1424cfc-ad10-45e2-b69f-f313e29e5b58-metrics-certs\") pod \"controller-5bddd4b946-jdmtk\" (UID: \"a1424cfc-ad10-45e2-b69f-f313e29e5b58\") " pod="metallb-system/controller-5bddd4b946-jdmtk" Dec 12 16:06:37 crc kubenswrapper[4693]: E1212 16:06:37.495790 4693 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Dec 12 16:06:37 crc kubenswrapper[4693]: E1212 16:06:37.495847 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1424cfc-ad10-45e2-b69f-f313e29e5b58-metrics-certs podName:a1424cfc-ad10-45e2-b69f-f313e29e5b58 nodeName:}" failed. No retries permitted until 2025-12-12 16:06:37.995828631 +0000 UTC m=+1225.164468232 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a1424cfc-ad10-45e2-b69f-f313e29e5b58-metrics-certs") pod "controller-5bddd4b946-jdmtk" (UID: "a1424cfc-ad10-45e2-b69f-f313e29e5b58") : secret "controller-certs-secret" not found Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.495787 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shdff\" (UniqueName: \"kubernetes.io/projected/a1424cfc-ad10-45e2-b69f-f313e29e5b58-kube-api-access-shdff\") pod \"controller-5bddd4b946-jdmtk\" (UID: \"a1424cfc-ad10-45e2-b69f-f313e29e5b58\") " pod="metallb-system/controller-5bddd4b946-jdmtk" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.495896 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmlg\" (UniqueName: \"kubernetes.io/projected/a0fbfcb7-b516-452f-be80-ddd275ed0987-kube-api-access-hqmlg\") pod \"speaker-ndkbb\" (UID: \"a0fbfcb7-b516-452f-be80-ddd275ed0987\") " pod="metallb-system/speaker-ndkbb" Dec 12 16:06:37 crc kubenswrapper[4693]: E1212 16:06:37.495419 4693 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.496101 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a0fbfcb7-b516-452f-be80-ddd275ed0987-metallb-excludel2\") pod \"speaker-ndkbb\" (UID: \"a0fbfcb7-b516-452f-be80-ddd275ed0987\") " pod="metallb-system/speaker-ndkbb" Dec 12 16:06:37 crc kubenswrapper[4693]: E1212 16:06:37.496310 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0fbfcb7-b516-452f-be80-ddd275ed0987-memberlist podName:a0fbfcb7-b516-452f-be80-ddd275ed0987 nodeName:}" failed. No retries permitted until 2025-12-12 16:06:37.996242882 +0000 UTC m=+1225.164882483 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a0fbfcb7-b516-452f-be80-ddd275ed0987-memberlist") pod "speaker-ndkbb" (UID: "a0fbfcb7-b516-452f-be80-ddd275ed0987") : secret "metallb-memberlist" not found Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.498126 4693 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.510549 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1424cfc-ad10-45e2-b69f-f313e29e5b58-cert\") pod \"controller-5bddd4b946-jdmtk\" (UID: \"a1424cfc-ad10-45e2-b69f-f313e29e5b58\") " pod="metallb-system/controller-5bddd4b946-jdmtk" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.519804 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shdff\" (UniqueName: \"kubernetes.io/projected/a1424cfc-ad10-45e2-b69f-f313e29e5b58-kube-api-access-shdff\") pod \"controller-5bddd4b946-jdmtk\" (UID: \"a1424cfc-ad10-45e2-b69f-f313e29e5b58\") " pod="metallb-system/controller-5bddd4b946-jdmtk" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.520738 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqmlg\" (UniqueName: \"kubernetes.io/projected/a0fbfcb7-b516-452f-be80-ddd275ed0987-kube-api-access-hqmlg\") pod \"speaker-ndkbb\" (UID: \"a0fbfcb7-b516-452f-be80-ddd275ed0987\") " pod="metallb-system/speaker-ndkbb" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.801842 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f24e6c5f-a222-44d6-8c2a-75b0d066e218-metrics-certs\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.804931 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f24e6c5f-a222-44d6-8c2a-75b0d066e218-metrics-certs\") pod \"frr-k8s-7qqwq\" (UID: \"f24e6c5f-a222-44d6-8c2a-75b0d066e218\") " pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.903081 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a6d7731-5c1e-4b9b-b847-6deabf3f6af9-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-mqxdn\" (UID: \"5a6d7731-5c1e-4b9b-b847-6deabf3f6af9\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" Dec 12 16:06:37 crc kubenswrapper[4693]: I1212 16:06:37.908100 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a6d7731-5c1e-4b9b-b847-6deabf3f6af9-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-mqxdn\" (UID: \"5a6d7731-5c1e-4b9b-b847-6deabf3f6af9\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" Dec 12 16:06:38 crc kubenswrapper[4693]: I1212 16:06:38.004606 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0fbfcb7-b516-452f-be80-ddd275ed0987-metrics-certs\") pod \"speaker-ndkbb\" (UID: \"a0fbfcb7-b516-452f-be80-ddd275ed0987\") " pod="metallb-system/speaker-ndkbb" Dec 12 16:06:38 crc kubenswrapper[4693]: I1212 16:06:38.004781 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" 
(UniqueName: \"kubernetes.io/secret/a0fbfcb7-b516-452f-be80-ddd275ed0987-memberlist\") pod \"speaker-ndkbb\" (UID: \"a0fbfcb7-b516-452f-be80-ddd275ed0987\") " pod="metallb-system/speaker-ndkbb" Dec 12 16:06:38 crc kubenswrapper[4693]: I1212 16:06:38.004815 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1424cfc-ad10-45e2-b69f-f313e29e5b58-metrics-certs\") pod \"controller-5bddd4b946-jdmtk\" (UID: \"a1424cfc-ad10-45e2-b69f-f313e29e5b58\") " pod="metallb-system/controller-5bddd4b946-jdmtk" Dec 12 16:06:38 crc kubenswrapper[4693]: E1212 16:06:38.005005 4693 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 12 16:06:38 crc kubenswrapper[4693]: E1212 16:06:38.005114 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0fbfcb7-b516-452f-be80-ddd275ed0987-memberlist podName:a0fbfcb7-b516-452f-be80-ddd275ed0987 nodeName:}" failed. No retries permitted until 2025-12-12 16:06:39.005093203 +0000 UTC m=+1226.173732804 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a0fbfcb7-b516-452f-be80-ddd275ed0987-memberlist") pod "speaker-ndkbb" (UID: "a0fbfcb7-b516-452f-be80-ddd275ed0987") : secret "metallb-memberlist" not found Dec 12 16:06:38 crc kubenswrapper[4693]: I1212 16:06:38.009783 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0fbfcb7-b516-452f-be80-ddd275ed0987-metrics-certs\") pod \"speaker-ndkbb\" (UID: \"a0fbfcb7-b516-452f-be80-ddd275ed0987\") " pod="metallb-system/speaker-ndkbb" Dec 12 16:06:38 crc kubenswrapper[4693]: I1212 16:06:38.009896 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1424cfc-ad10-45e2-b69f-f313e29e5b58-metrics-certs\") pod \"controller-5bddd4b946-jdmtk\" (UID: \"a1424cfc-ad10-45e2-b69f-f313e29e5b58\") " pod="metallb-system/controller-5bddd4b946-jdmtk" Dec 12 16:06:38 crc kubenswrapper[4693]: I1212 16:06:38.092649 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:06:38 crc kubenswrapper[4693]: I1212 16:06:38.121549 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" Dec 12 16:06:38 crc kubenswrapper[4693]: I1212 16:06:38.274981 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-jdmtk" Dec 12 16:06:38 crc kubenswrapper[4693]: I1212 16:06:38.535997 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn"] Dec 12 16:06:38 crc kubenswrapper[4693]: W1212 16:06:38.539520 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a6d7731_5c1e_4b9b_b847_6deabf3f6af9.slice/crio-51a4b0d8f42434ad8830801764923c00251d6ac1310728aaf28fe1d692cd8d32 WatchSource:0}: Error finding container 51a4b0d8f42434ad8830801764923c00251d6ac1310728aaf28fe1d692cd8d32: Status 404 returned error can't find the container with id 51a4b0d8f42434ad8830801764923c00251d6ac1310728aaf28fe1d692cd8d32 Dec 12 16:06:38 crc kubenswrapper[4693]: I1212 16:06:38.709709 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-jdmtk"] Dec 12 16:06:39 crc kubenswrapper[4693]: I1212 16:06:39.021873 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a0fbfcb7-b516-452f-be80-ddd275ed0987-memberlist\") pod \"speaker-ndkbb\" (UID: \"a0fbfcb7-b516-452f-be80-ddd275ed0987\") " pod="metallb-system/speaker-ndkbb" Dec 12 16:06:39 crc kubenswrapper[4693]: I1212 16:06:39.030008 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a0fbfcb7-b516-452f-be80-ddd275ed0987-memberlist\") pod \"speaker-ndkbb\" (UID: \"a0fbfcb7-b516-452f-be80-ddd275ed0987\") " pod="metallb-system/speaker-ndkbb" Dec 12 16:06:39 crc kubenswrapper[4693]: I1212 16:06:39.161675 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-ndkbb" Dec 12 16:06:39 crc kubenswrapper[4693]: W1212 16:06:39.180909 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0fbfcb7_b516_452f_be80_ddd275ed0987.slice/crio-76ffd79ef63667cd6f18aee331ed00124586df6effa3b545c7ca391512718d30 WatchSource:0}: Error finding container 76ffd79ef63667cd6f18aee331ed00124586df6effa3b545c7ca391512718d30: Status 404 returned error can't find the container with id 76ffd79ef63667cd6f18aee331ed00124586df6effa3b545c7ca391512718d30 Dec 12 16:06:39 crc kubenswrapper[4693]: I1212 16:06:39.486286 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" event={"ID":"5a6d7731-5c1e-4b9b-b847-6deabf3f6af9","Type":"ContainerStarted","Data":"51a4b0d8f42434ad8830801764923c00251d6ac1310728aaf28fe1d692cd8d32"} Dec 12 16:06:39 crc kubenswrapper[4693]: I1212 16:06:39.488135 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qqwq" event={"ID":"f24e6c5f-a222-44d6-8c2a-75b0d066e218","Type":"ContainerStarted","Data":"a169d9f7c597f7a357138da16cae93f687c4c81ff686f676f61e89866bd83719"} Dec 12 16:06:39 crc kubenswrapper[4693]: I1212 16:06:39.493855 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-jdmtk" event={"ID":"a1424cfc-ad10-45e2-b69f-f313e29e5b58","Type":"ContainerStarted","Data":"bd9e4266b7617f1144c855b256e414ba052dc0bfb4ee53ab8952214200573db9"} Dec 12 16:06:39 crc kubenswrapper[4693]: I1212 16:06:39.493906 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-jdmtk" 
event={"ID":"a1424cfc-ad10-45e2-b69f-f313e29e5b58","Type":"ContainerStarted","Data":"9f38dfa2e4b74dda450b0d2fe8df352af90bd83b0df06cf34ec4df4d98ed3127"} Dec 12 16:06:39 crc kubenswrapper[4693]: I1212 16:06:39.493922 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-jdmtk" event={"ID":"a1424cfc-ad10-45e2-b69f-f313e29e5b58","Type":"ContainerStarted","Data":"ec12e1368dde97016e1c09a627c17a70acc21683055600fe3164458d40ba3ae1"} Dec 12 16:06:39 crc kubenswrapper[4693]: I1212 16:06:39.493947 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-jdmtk" Dec 12 16:06:39 crc kubenswrapper[4693]: I1212 16:06:39.496913 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ndkbb" event={"ID":"a0fbfcb7-b516-452f-be80-ddd275ed0987","Type":"ContainerStarted","Data":"231ef294eed60ec252f71a59ce58051eef244007adb124dd46f77c91963b90f9"} Dec 12 16:06:39 crc kubenswrapper[4693]: I1212 16:06:39.496959 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ndkbb" event={"ID":"a0fbfcb7-b516-452f-be80-ddd275ed0987","Type":"ContainerStarted","Data":"76ffd79ef63667cd6f18aee331ed00124586df6effa3b545c7ca391512718d30"} Dec 12 16:06:39 crc kubenswrapper[4693]: I1212 16:06:39.519645 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-jdmtk" podStartSLOduration=2.519626742 podStartE2EDuration="2.519626742s" podCreationTimestamp="2025-12-12 16:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:06:39.517375402 +0000 UTC m=+1226.686015003" watchObservedRunningTime="2025-12-12 16:06:39.519626742 +0000 UTC m=+1226.688266343" Dec 12 16:06:40 crc kubenswrapper[4693]: I1212 16:06:40.508603 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ndkbb" event={"ID":"a0fbfcb7-b516-452f-be80-ddd275ed0987","Type":"ContainerStarted","Data":"8e831b0801e2fb23bfb24cbab83556caa2257935aea77a5db23b897f04833d6f"} Dec 12 16:06:40 crc kubenswrapper[4693]: I1212 16:06:40.534543 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-ndkbb" podStartSLOduration=3.534521948 podStartE2EDuration="3.534521948s" podCreationTimestamp="2025-12-12 16:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:06:40.52789257 +0000 UTC m=+1227.696532171" watchObservedRunningTime="2025-12-12 16:06:40.534521948 +0000 UTC m=+1227.703161569" Dec 12 16:06:41 crc kubenswrapper[4693]: I1212 16:06:41.521213 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-ndkbb" Dec 12 16:06:42 crc kubenswrapper[4693]: I1212 16:06:42.530327 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:06:42 crc kubenswrapper[4693]: I1212 16:06:42.530610 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:06:42 crc kubenswrapper[4693]: I1212 16:06:42.530662 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 16:06:42 crc kubenswrapper[4693]: I1212 16:06:42.531338 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b99609eca8bf887c0f086d452cd1f8437812e8c5e6edb0ab2c3a059f6382847"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:06:42 crc kubenswrapper[4693]: I1212 16:06:42.531414 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://9b99609eca8bf887c0f086d452cd1f8437812e8c5e6edb0ab2c3a059f6382847" gracePeriod=600 Dec 12 16:06:43 crc kubenswrapper[4693]: I1212 16:06:43.540532 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="9b99609eca8bf887c0f086d452cd1f8437812e8c5e6edb0ab2c3a059f6382847" exitCode=0 Dec 12 16:06:43 crc kubenswrapper[4693]: I1212 16:06:43.540584 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"9b99609eca8bf887c0f086d452cd1f8437812e8c5e6edb0ab2c3a059f6382847"} Dec 12 16:06:43 crc kubenswrapper[4693]: I1212 16:06:43.540626 4693 scope.go:117] "RemoveContainer" containerID="6f8076bbaf9c92a7134e9ae28b9eeeb9f0776e367f05e17a692efcc2523d8648" Dec 12 16:06:44 crc kubenswrapper[4693]: I1212 16:06:44.553452 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"fa5a22453d813e4ad162e4fc8b28463dbad032801eec3a25e1c47d7ec02c9b9a"} Dec 12 16:06:48 crc kubenswrapper[4693]: I1212 16:06:48.279240 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-jdmtk" Dec 12 16:06:49 crc kubenswrapper[4693]: I1212 16:06:49.165229 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-ndkbb" Dec 12 16:06:52 crc kubenswrapper[4693]: I1212 16:06:52.172822 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-jwqcs"] Dec 12 16:06:52 crc kubenswrapper[4693]: I1212 16:06:52.174051 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jwqcs" Dec 12 16:06:52 crc kubenswrapper[4693]: I1212 16:06:52.176637 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 12 16:06:52 crc kubenswrapper[4693]: I1212 16:06:52.176957 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 12 16:06:52 crc kubenswrapper[4693]: I1212 16:06:52.177309 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-jc2tk" Dec 12 16:06:52 crc kubenswrapper[4693]: I1212 16:06:52.194128 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jwqcs"] Dec 12 16:06:52 crc kubenswrapper[4693]: I1212 16:06:52.276421 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnbnl\" (UniqueName: \"kubernetes.io/projected/01e7fc7f-62b5-4a37-921e-beb41af3419b-kube-api-access-dnbnl\") pod \"openstack-operator-index-jwqcs\" (UID: \"01e7fc7f-62b5-4a37-921e-beb41af3419b\") " pod="openstack-operators/openstack-operator-index-jwqcs" Dec 12 16:06:52 crc kubenswrapper[4693]: I1212 16:06:52.379401 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnbnl\" (UniqueName: \"kubernetes.io/projected/01e7fc7f-62b5-4a37-921e-beb41af3419b-kube-api-access-dnbnl\") pod \"openstack-operator-index-jwqcs\" (UID: \"01e7fc7f-62b5-4a37-921e-beb41af3419b\") " pod="openstack-operators/openstack-operator-index-jwqcs" Dec 12 16:06:52 crc kubenswrapper[4693]: I1212 16:06:52.409695 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnbnl\" (UniqueName: \"kubernetes.io/projected/01e7fc7f-62b5-4a37-921e-beb41af3419b-kube-api-access-dnbnl\") pod \"openstack-operator-index-jwqcs\" (UID: \"01e7fc7f-62b5-4a37-921e-beb41af3419b\") " pod="openstack-operators/openstack-operator-index-jwqcs" Dec 12 16:06:52 crc kubenswrapper[4693]: I1212 16:06:52.498792 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jwqcs" Dec 12 16:06:52 crc kubenswrapper[4693]: E1212 16:06:52.725116 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/frr-rhel9@sha256:68e021ee9884e51f1cdee65ef5b86f64eea17344477990b4ba8bd52a1581e7cd" Dec 12 16:06:52 crc kubenswrapper[4693]: E1212 16:06:52.725579 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:frr-k8s-webhook-server,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:68e021ee9884e51f1cdee65ef5b86f64eea17344477990b4ba8bd52a1581e7cd,Command:[/frr-k8s],Args:[--log-level=debug --webhook-mode=onlywebhook --disable-cert-rotation=true --namespace=$(NAMESPACE) --metrics-bind-address=:7572],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:monitoring,HostPort:0,ContainerPort:7572,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lkvmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000730000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-webhook-server-7784b6fcf-mqxdn_metallb-system(5a6d7731-5c1e-4b9b-b847-6deabf3f6af9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 16:06:52 crc kubenswrapper[4693]: E1212 16:06:52.726974 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" 
podUID="5a6d7731-5c1e-4b9b-b847-6deabf3f6af9" Dec 12 16:06:52 crc kubenswrapper[4693]: I1212 16:06:52.988845 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jwqcs"] Dec 12 16:06:53 crc kubenswrapper[4693]: E1212 16:06:53.200842 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/frr-rhel9@sha256:68e021ee9884e51f1cdee65ef5b86f64eea17344477990b4ba8bd52a1581e7cd" Dec 12 16:06:53 crc kubenswrapper[4693]: E1212 16:06:53.200998 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:cp-frr-files,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:68e021ee9884e51f1cdee65ef5b86f64eea17344477990b4ba8bd52a1581e7cd,Command:[/bin/sh -c cp -rLf /tmp/frr/* /etc/frr/],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:frr-startup,ReadOnly:false,MountPath:/tmp/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:frr-conf,ReadOnly:false,MountPath:/etc/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xs2gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*100,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*101,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-7qqwq_metallb-system(f24e6c5f-a222-44d6-8c2a-75b0d066e218): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 16:06:53 crc kubenswrapper[4693]: E1212 16:06:53.202157 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="metallb-system/frr-k8s-7qqwq" podUID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" Dec 12 16:06:53 crc kubenswrapper[4693]: I1212 16:06:53.637823 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jwqcs" event={"ID":"01e7fc7f-62b5-4a37-921e-beb41af3419b","Type":"ContainerStarted","Data":"96fecf35bfc9f45d9c984e6a21d0f83ea5a3bb427576af36ff7d0d6b4572ba92"} Dec 12 16:06:53 crc kubenswrapper[4693]: E1212 16:06:53.641352 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:68e021ee9884e51f1cdee65ef5b86f64eea17344477990b4ba8bd52a1581e7cd\\\"\"" pod="metallb-system/frr-k8s-7qqwq" podUID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" Dec 12 16:06:53 crc kubenswrapper[4693]: E1212 
16:06:53.641376 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:68e021ee9884e51f1cdee65ef5b86f64eea17344477990b4ba8bd52a1581e7cd\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" podUID="5a6d7731-5c1e-4b9b-b847-6deabf3f6af9"
Dec 12 16:06:56 crc kubenswrapper[4693]: I1212 16:06:56.537964 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-jwqcs"]
Dec 12 16:06:57 crc kubenswrapper[4693]: I1212 16:06:57.152362 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-f2llf"]
Dec 12 16:06:57 crc kubenswrapper[4693]: I1212 16:06:57.171829 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f2llf"]
Dec 12 16:06:57 crc kubenswrapper[4693]: I1212 16:06:57.171973 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f2llf"
Dec 12 16:06:57 crc kubenswrapper[4693]: I1212 16:06:57.280550 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pkwj\" (UniqueName: \"kubernetes.io/projected/19868aeb-2fda-43a6-8801-7d72c8465394-kube-api-access-5pkwj\") pod \"openstack-operator-index-f2llf\" (UID: \"19868aeb-2fda-43a6-8801-7d72c8465394\") " pod="openstack-operators/openstack-operator-index-f2llf"
Dec 12 16:06:57 crc kubenswrapper[4693]: I1212 16:06:57.381820 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pkwj\" (UniqueName: \"kubernetes.io/projected/19868aeb-2fda-43a6-8801-7d72c8465394-kube-api-access-5pkwj\") pod \"openstack-operator-index-f2llf\" (UID: \"19868aeb-2fda-43a6-8801-7d72c8465394\") " pod="openstack-operators/openstack-operator-index-f2llf"
Dec 12 16:06:57 crc kubenswrapper[4693]: I1212 16:06:57.401654 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pkwj\" (UniqueName: \"kubernetes.io/projected/19868aeb-2fda-43a6-8801-7d72c8465394-kube-api-access-5pkwj\") pod \"openstack-operator-index-f2llf\" (UID: \"19868aeb-2fda-43a6-8801-7d72c8465394\") " pod="openstack-operators/openstack-operator-index-f2llf"
Dec 12 16:06:57 crc kubenswrapper[4693]: I1212 16:06:57.503086 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f2llf"
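Note: the ErrImagePull and ImagePullBackOff entries above are kubelet's normal reaction to the interrupted pulls ("context canceled") earlier in the log: each failed pull re-enters the sync loop, and consecutive failures lengthen the delay before the next attempt. Below is a minimal sketch of that doubling backoff, assuming kubelet's usual defaults of a 10s initial delay and a 5m cap; the sketch and constants are illustrative, not kubelet source.

    // backoff_sketch.go: illustrates the doubling retry delay implied by
    // the "Back-off pulling image" entries above. The 10s initial delay
    // and 5m cap are assumed defaults, not read from this log.
    package main

    import (
        "fmt"
        "time"
    )

    // nextDelay returns the wait before retrying after the nth consecutive
    // pull failure: the delay doubles each time until it hits the limit.
    func nextDelay(failures int, initial, limit time.Duration) time.Duration {
        d := initial
        for i := 1; i < failures; i++ {
            d *= 2
            if d >= limit {
                return limit
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 7; n++ {
            fmt.Printf("failure %d: retry in %v\n", n, nextDelay(n, 10*time.Second, 5*time.Minute))
        }
        // failure 1: retry in 10s ... failure 6 and later: retry in 5m0s
    }

In this capture the backoff never escalates: the frr-rhel9 image lands on a later attempt and the affected metallb pods report progress again at 16:07:09.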
Dec 12 16:06:57 crc kubenswrapper[4693]: I1212 16:06:57.901932 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f2llf"]
Dec 12 16:06:57 crc kubenswrapper[4693]: W1212 16:06:57.918862 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19868aeb_2fda_43a6_8801_7d72c8465394.slice/crio-b7b2420ac42b29a5e1d226b49996acc226575cae549111a63f9662e765d32649 WatchSource:0}: Error finding container b7b2420ac42b29a5e1d226b49996acc226575cae549111a63f9662e765d32649: Status 404 returned error can't find the container with id b7b2420ac42b29a5e1d226b49996acc226575cae549111a63f9662e765d32649
Dec 12 16:06:58 crc kubenswrapper[4693]: I1212 16:06:58.680214 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f2llf" event={"ID":"19868aeb-2fda-43a6-8801-7d72c8465394","Type":"ContainerStarted","Data":"b7b2420ac42b29a5e1d226b49996acc226575cae549111a63f9662e765d32649"}
Dec 12 16:07:09 crc kubenswrapper[4693]: I1212 16:07:09.796464 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f2llf" event={"ID":"19868aeb-2fda-43a6-8801-7d72c8465394","Type":"ContainerStarted","Data":"72084ee966659b77cd3a66842b1c8f2883e459499e08f949184fd9e6751c96ea"}
Dec 12 16:07:09 crc kubenswrapper[4693]: I1212 16:07:09.798049 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" event={"ID":"5a6d7731-5c1e-4b9b-b847-6deabf3f6af9","Type":"ContainerStarted","Data":"e1c262531dfecff5b7c6af1ae256cd13df8258d3b62f310fcb97d8e08ffe7370"}
Dec 12 16:07:09 crc kubenswrapper[4693]: I1212 16:07:09.798314 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn"
Dec 12 16:07:09 crc kubenswrapper[4693]: I1212 16:07:09.799053 4693 generic.go:334] "Generic (PLEG): container finished" podID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerID="7dc3798569353cfc1a973fc75edf2d282164ee9b6eaf11fa3b02410969bbfa3f" exitCode=0
Dec 12 16:07:09 crc kubenswrapper[4693]: I1212 16:07:09.799113 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qqwq" event={"ID":"f24e6c5f-a222-44d6-8c2a-75b0d066e218","Type":"ContainerDied","Data":"7dc3798569353cfc1a973fc75edf2d282164ee9b6eaf11fa3b02410969bbfa3f"}
Dec 12 16:07:09 crc kubenswrapper[4693]: I1212 16:07:09.801115 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jwqcs" event={"ID":"01e7fc7f-62b5-4a37-921e-beb41af3419b","Type":"ContainerStarted","Data":"b4deb5aea30f8ab43818a50102d17bd858460c0087ae21f029423112e36ba764"}
Dec 12 16:07:09 crc kubenswrapper[4693]: I1212 16:07:09.822898 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" podStartSLOduration=2.43375826 podStartE2EDuration="32.82287738s" podCreationTimestamp="2025-12-12 16:06:37 +0000 UTC" firstStartedPulling="2025-12-12 16:06:38.541901025 +0000 UTC m=+1225.710540626" lastFinishedPulling="2025-12-12 16:07:08.931020145 +0000 UTC m=+1256.099659746" observedRunningTime="2025-12-12 16:07:09.815124342 +0000 UTC m=+1256.983763943" watchObservedRunningTime="2025-12-12 16:07:09.82287738 +0000 UTC m=+1256.991516981"
Dec 12 16:07:10 crc kubenswrapper[4693]: I1212 16:07:10.810715 4693 generic.go:334] "Generic (PLEG): container finished" podID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerID="1db4912fbb27627a0650141582f42452b67d8234d4218e1ef7c0a7ab34bd7dd8" exitCode=0
Dec 12 16:07:10 crc kubenswrapper[4693]: I1212 16:07:10.812075 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qqwq" event={"ID":"f24e6c5f-a222-44d6-8c2a-75b0d066e218","Type":"ContainerDied","Data":"1db4912fbb27627a0650141582f42452b67d8234d4218e1ef7c0a7ab34bd7dd8"}
Dec 12 16:07:10 crc kubenswrapper[4693]: I1212 16:07:10.813303 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-jwqcs" podUID="01e7fc7f-62b5-4a37-921e-beb41af3419b" containerName="registry-server" containerID="cri-o://b4deb5aea30f8ab43818a50102d17bd858460c0087ae21f029423112e36ba764" gracePeriod=2
Dec 12 16:07:10 crc kubenswrapper[4693]: I1212 16:07:10.830842 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-f2llf" podStartSLOduration=2.823592258 podStartE2EDuration="13.830816934s" podCreationTimestamp="2025-12-12 16:06:57 +0000 UTC" firstStartedPulling="2025-12-12 16:06:57.920743307 +0000 UTC m=+1245.089382908" lastFinishedPulling="2025-12-12 16:07:08.927967983 +0000 UTC m=+1256.096607584" observedRunningTime="2025-12-12 16:07:10.82543648 +0000 UTC m=+1257.994076091" watchObservedRunningTime="2025-12-12 16:07:10.830816934 +0000 UTC m=+1257.999456535"
Dec 12 16:07:10 crc kubenswrapper[4693]: I1212 16:07:10.864055 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-jwqcs" podStartSLOduration=3.058580557 podStartE2EDuration="18.864040237s" podCreationTimestamp="2025-12-12 16:06:52 +0000 UTC" firstStartedPulling="2025-12-12 16:06:52.991876843 +0000 UTC m=+1240.160516444" lastFinishedPulling="2025-12-12 16:07:08.797336523 +0000 UTC m=+1255.965976124" observedRunningTime="2025-12-12 16:07:10.859768802 +0000 UTC m=+1258.028408423" watchObservedRunningTime="2025-12-12 16:07:10.864040237 +0000 UTC m=+1258.032679838"
Dec 12 16:07:11 crc kubenswrapper[4693]: I1212 16:07:11.820739 4693 generic.go:334] "Generic (PLEG): container finished" podID="01e7fc7f-62b5-4a37-921e-beb41af3419b" containerID="b4deb5aea30f8ab43818a50102d17bd858460c0087ae21f029423112e36ba764" exitCode=0
Dec 12 16:07:11 crc kubenswrapper[4693]: I1212 16:07:11.820849 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jwqcs" event={"ID":"01e7fc7f-62b5-4a37-921e-beb41af3419b","Type":"ContainerDied","Data":"b4deb5aea30f8ab43818a50102d17bd858460c0087ae21f029423112e36ba764"}
Dec 12 16:07:11 crc kubenswrapper[4693]: I1212 16:07:11.821658 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jwqcs" event={"ID":"01e7fc7f-62b5-4a37-921e-beb41af3419b","Type":"ContainerDied","Data":"96fecf35bfc9f45d9c984e6a21d0f83ea5a3bb427576af36ff7d0d6b4572ba92"}
Dec 12 16:07:11 crc kubenswrapper[4693]: I1212 16:07:11.821687 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96fecf35bfc9f45d9c984e6a21d0f83ea5a3bb427576af36ff7d0d6b4572ba92"
Dec 12 16:07:11 crc kubenswrapper[4693]: I1212 16:07:11.824947 4693 generic.go:334] "Generic (PLEG): container finished" podID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerID="d5ccfa5819df0fef7901cc3d99901b1f729d5cebc3786a634ed18e79f0de26a8" exitCode=0
Dec 12 16:07:11 crc kubenswrapper[4693]: I1212 16:07:11.824983 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qqwq" event={"ID":"f24e6c5f-a222-44d6-8c2a-75b0d066e218","Type":"ContainerDied","Data":"d5ccfa5819df0fef7901cc3d99901b1f729d5cebc3786a634ed18e79f0de26a8"}
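Note: the three "Observed pod startup duration" entries above all satisfy the same arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick recomputation for the frr-k8s-webhook-server entry, assuming that reading of the fields (it reproduces the logged values exactly):

    // slo_sketch.go: recomputes the frr-k8s-webhook-server figures from
    // the timestamps logged above; the field interpretation is an
    // assumption based on the entry itself, not kubelet source.
    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func ts(v string) time.Time {
        t, err := time.Parse(layout, v)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := ts("2025-12-12 16:06:37 +0000 UTC")             // podCreationTimestamp
        firstPull := ts("2025-12-12 16:06:38.541901025 +0000 UTC") // firstStartedPulling
        lastPull := ts("2025-12-12 16:07:08.931020145 +0000 UTC")  // lastFinishedPulling
        running := ts("2025-12-12 16:07:09.82287738 +0000 UTC")    // watchObservedRunningTime

        e2e := running.Sub(created)          // 32.82287738s = podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // 2.43375826s  = podStartSLOduration
        fmt.Println(e2e, slo)
    }

Of the 32.82287738s end-to-end startup, 30.38911912s was spent in the pull window, leaving the 2.43375826s SLO figure in the entry.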
Dec 12 16:07:11 crc kubenswrapper[4693]: I1212 16:07:11.901649 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jwqcs"
Dec 12 16:07:12 crc kubenswrapper[4693]: I1212 16:07:12.063849 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnbnl\" (UniqueName: \"kubernetes.io/projected/01e7fc7f-62b5-4a37-921e-beb41af3419b-kube-api-access-dnbnl\") pod \"01e7fc7f-62b5-4a37-921e-beb41af3419b\" (UID: \"01e7fc7f-62b5-4a37-921e-beb41af3419b\") "
Dec 12 16:07:12 crc kubenswrapper[4693]: I1212 16:07:12.075965 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e7fc7f-62b5-4a37-921e-beb41af3419b-kube-api-access-dnbnl" (OuterVolumeSpecName: "kube-api-access-dnbnl") pod "01e7fc7f-62b5-4a37-921e-beb41af3419b" (UID: "01e7fc7f-62b5-4a37-921e-beb41af3419b"). InnerVolumeSpecName "kube-api-access-dnbnl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:07:12 crc kubenswrapper[4693]: I1212 16:07:12.167898 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnbnl\" (UniqueName: \"kubernetes.io/projected/01e7fc7f-62b5-4a37-921e-beb41af3419b-kube-api-access-dnbnl\") on node \"crc\" DevicePath \"\""
Dec 12 16:07:12 crc kubenswrapper[4693]: I1212 16:07:12.840208 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jwqcs"
Dec 12 16:07:12 crc kubenswrapper[4693]: I1212 16:07:12.843547 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qqwq" event={"ID":"f24e6c5f-a222-44d6-8c2a-75b0d066e218","Type":"ContainerStarted","Data":"20587660f0b1976afd8b689326d141a7a9ae1e6c7871d869a9aef05662aa3318"}
Dec 12 16:07:12 crc kubenswrapper[4693]: I1212 16:07:12.843642 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qqwq" event={"ID":"f24e6c5f-a222-44d6-8c2a-75b0d066e218","Type":"ContainerStarted","Data":"020a118cda47c4339ba0ef9dd6d3d3b139bcef20bca409e55f2a7cd0885112e1"}
Dec 12 16:07:12 crc kubenswrapper[4693]: I1212 16:07:12.888897 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-jwqcs"]
Dec 12 16:07:12 crc kubenswrapper[4693]: I1212 16:07:12.901833 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-jwqcs"]
Dec 12 16:07:13 crc kubenswrapper[4693]: I1212 16:07:13.371906 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e7fc7f-62b5-4a37-921e-beb41af3419b" path="/var/lib/kubelet/pods/01e7fc7f-62b5-4a37-921e-beb41af3419b/volumes"
Dec 12 16:07:13 crc kubenswrapper[4693]: I1212 16:07:13.851559 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qqwq" event={"ID":"f24e6c5f-a222-44d6-8c2a-75b0d066e218","Type":"ContainerStarted","Data":"ffc8a4cfaa81aab4ca521f7d0690fe13986834ed0b992803c8ad0e3c309a6e38"}
Dec 12 16:07:13 crc kubenswrapper[4693]: I1212 16:07:13.851795 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qqwq"
event={"ID":"f24e6c5f-a222-44d6-8c2a-75b0d066e218","Type":"ContainerStarted","Data":"88f87653c80d5d98c7fe7f747f819bf8d9c795ea58fdadb6cfab61e2ebab8e65"} Dec 12 16:07:13 crc kubenswrapper[4693]: I1212 16:07:13.851805 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qqwq" event={"ID":"f24e6c5f-a222-44d6-8c2a-75b0d066e218","Type":"ContainerStarted","Data":"91b1d1de550dc728eb8910f4046bddab227e13a6008f556daa10e2432f6d5383"} Dec 12 16:07:14 crc kubenswrapper[4693]: I1212 16:07:14.863418 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qqwq" event={"ID":"f24e6c5f-a222-44d6-8c2a-75b0d066e218","Type":"ContainerStarted","Data":"b04976b673b0c16b1c50ee1b5acb777229c3d2b458b99cb6b1a12f0ffa9dddff"} Dec 12 16:07:14 crc kubenswrapper[4693]: I1212 16:07:14.863614 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:07:14 crc kubenswrapper[4693]: I1212 16:07:14.891015 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-7qqwq" podStartSLOduration=7.431813785 podStartE2EDuration="37.890999317s" podCreationTimestamp="2025-12-12 16:06:37 +0000 UTC" firstStartedPulling="2025-12-12 16:06:38.468893464 +0000 UTC m=+1225.637533065" lastFinishedPulling="2025-12-12 16:07:08.928078986 +0000 UTC m=+1256.096718597" observedRunningTime="2025-12-12 16:07:14.886059894 +0000 UTC m=+1262.054699525" watchObservedRunningTime="2025-12-12 16:07:14.890999317 +0000 UTC m=+1262.059638918" Dec 12 16:07:17 crc kubenswrapper[4693]: I1212 16:07:17.504257 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-f2llf" Dec 12 16:07:17 crc kubenswrapper[4693]: I1212 16:07:17.506825 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-f2llf" Dec 12 16:07:17 crc kubenswrapper[4693]: I1212 16:07:17.552149 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-f2llf" Dec 12 16:07:17 crc kubenswrapper[4693]: I1212 16:07:17.918564 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-f2llf" Dec 12 16:07:18 crc kubenswrapper[4693]: I1212 16:07:18.093600 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:07:18 crc kubenswrapper[4693]: I1212 16:07:18.128431 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:07:24 crc kubenswrapper[4693]: I1212 16:07:24.797991 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd"] Dec 12 16:07:24 crc kubenswrapper[4693]: E1212 16:07:24.798895 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e7fc7f-62b5-4a37-921e-beb41af3419b" containerName="registry-server" Dec 12 16:07:24 crc kubenswrapper[4693]: I1212 16:07:24.798913 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e7fc7f-62b5-4a37-921e-beb41af3419b" containerName="registry-server" Dec 12 16:07:24 crc kubenswrapper[4693]: I1212 16:07:24.799091 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e7fc7f-62b5-4a37-921e-beb41af3419b" containerName="registry-server" Dec 12 16:07:24 crc kubenswrapper[4693]: I1212 16:07:24.800168 4693 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" Dec 12 16:07:24 crc kubenswrapper[4693]: I1212 16:07:24.802265 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-r2lps" Dec 12 16:07:24 crc kubenswrapper[4693]: I1212 16:07:24.811031 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd"] Dec 12 16:07:24 crc kubenswrapper[4693]: I1212 16:07:24.899786 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/600ce031-56d1-4574-adb3-72be99bca3fa-bundle\") pod \"e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd\" (UID: \"600ce031-56d1-4574-adb3-72be99bca3fa\") " pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" Dec 12 16:07:24 crc kubenswrapper[4693]: I1212 16:07:24.900033 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpwtz\" (UniqueName: \"kubernetes.io/projected/600ce031-56d1-4574-adb3-72be99bca3fa-kube-api-access-lpwtz\") pod \"e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd\" (UID: \"600ce031-56d1-4574-adb3-72be99bca3fa\") " pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" Dec 12 16:07:24 crc kubenswrapper[4693]: I1212 16:07:24.900084 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/600ce031-56d1-4574-adb3-72be99bca3fa-util\") pod \"e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd\" (UID: \"600ce031-56d1-4574-adb3-72be99bca3fa\") " pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" Dec 12 16:07:25 crc kubenswrapper[4693]: I1212 16:07:25.002432 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpwtz\" (UniqueName: \"kubernetes.io/projected/600ce031-56d1-4574-adb3-72be99bca3fa-kube-api-access-lpwtz\") pod \"e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd\" (UID: \"600ce031-56d1-4574-adb3-72be99bca3fa\") " pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" Dec 12 16:07:25 crc kubenswrapper[4693]: I1212 16:07:25.002550 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/600ce031-56d1-4574-adb3-72be99bca3fa-util\") pod \"e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd\" (UID: \"600ce031-56d1-4574-adb3-72be99bca3fa\") " pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" Dec 12 16:07:25 crc kubenswrapper[4693]: I1212 16:07:25.002606 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/600ce031-56d1-4574-adb3-72be99bca3fa-bundle\") pod \"e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd\" (UID: \"600ce031-56d1-4574-adb3-72be99bca3fa\") " pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" Dec 12 16:07:25 crc kubenswrapper[4693]: I1212 16:07:25.003383 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/600ce031-56d1-4574-adb3-72be99bca3fa-bundle\") pod \"e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd\" (UID: \"600ce031-56d1-4574-adb3-72be99bca3fa\") " pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" Dec 12 16:07:25 crc kubenswrapper[4693]: I1212 16:07:25.003467 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/600ce031-56d1-4574-adb3-72be99bca3fa-util\") pod \"e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd\" (UID: \"600ce031-56d1-4574-adb3-72be99bca3fa\") " pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" Dec 12 16:07:25 crc kubenswrapper[4693]: I1212 16:07:25.028994 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpwtz\" (UniqueName: \"kubernetes.io/projected/600ce031-56d1-4574-adb3-72be99bca3fa-kube-api-access-lpwtz\") pod \"e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd\" (UID: \"600ce031-56d1-4574-adb3-72be99bca3fa\") " pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" Dec 12 16:07:25 crc kubenswrapper[4693]: I1212 16:07:25.161767 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" Dec 12 16:07:25 crc kubenswrapper[4693]: I1212 16:07:25.609229 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd"] Dec 12 16:07:25 crc kubenswrapper[4693]: I1212 16:07:25.966178 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" event={"ID":"600ce031-56d1-4574-adb3-72be99bca3fa","Type":"ContainerStarted","Data":"7a42f590373603109bbaa1445beed47797fc6d610afee2f81806f536dbbedc45"} Dec 12 16:07:25 crc kubenswrapper[4693]: I1212 16:07:25.966225 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" event={"ID":"600ce031-56d1-4574-adb3-72be99bca3fa","Type":"ContainerStarted","Data":"8b04cefadf3e9e43816925af368128906d446ae338acd872ee4d686021b30867"} Dec 12 16:07:26 crc kubenswrapper[4693]: I1212 16:07:26.987414 4693 generic.go:334] "Generic (PLEG): container finished" podID="600ce031-56d1-4574-adb3-72be99bca3fa" containerID="7a42f590373603109bbaa1445beed47797fc6d610afee2f81806f536dbbedc45" exitCode=0 Dec 12 16:07:26 crc kubenswrapper[4693]: I1212 16:07:26.987546 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" event={"ID":"600ce031-56d1-4574-adb3-72be99bca3fa","Type":"ContainerDied","Data":"7a42f590373603109bbaa1445beed47797fc6d610afee2f81806f536dbbedc45"} Dec 12 16:07:27 crc kubenswrapper[4693]: I1212 16:07:27.998163 4693 generic.go:334] "Generic (PLEG): container finished" podID="600ce031-56d1-4574-adb3-72be99bca3fa" containerID="ae4c88ea3c2b54692c55c84c36aabfddb855b362b7b13cfb22855abf03db9953" exitCode=0 Dec 12 16:07:27 crc kubenswrapper[4693]: I1212 16:07:27.998324 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" 
event={"ID":"600ce031-56d1-4574-adb3-72be99bca3fa","Type":"ContainerDied","Data":"ae4c88ea3c2b54692c55c84c36aabfddb855b362b7b13cfb22855abf03db9953"} Dec 12 16:07:28 crc kubenswrapper[4693]: I1212 16:07:28.095221 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-7qqwq" Dec 12 16:07:28 crc kubenswrapper[4693]: I1212 16:07:28.130336 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" Dec 12 16:07:29 crc kubenswrapper[4693]: I1212 16:07:29.010526 4693 generic.go:334] "Generic (PLEG): container finished" podID="600ce031-56d1-4574-adb3-72be99bca3fa" containerID="fe046396d5914e1dbd4a8d2535809a8ff46204229e3c1bd0bc99cdae18ba428c" exitCode=0 Dec 12 16:07:29 crc kubenswrapper[4693]: I1212 16:07:29.010783 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" event={"ID":"600ce031-56d1-4574-adb3-72be99bca3fa","Type":"ContainerDied","Data":"fe046396d5914e1dbd4a8d2535809a8ff46204229e3c1bd0bc99cdae18ba428c"} Dec 12 16:07:30 crc kubenswrapper[4693]: I1212 16:07:30.323960 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" Dec 12 16:07:30 crc kubenswrapper[4693]: I1212 16:07:30.393450 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/600ce031-56d1-4574-adb3-72be99bca3fa-bundle\") pod \"600ce031-56d1-4574-adb3-72be99bca3fa\" (UID: \"600ce031-56d1-4574-adb3-72be99bca3fa\") " Dec 12 16:07:30 crc kubenswrapper[4693]: I1212 16:07:30.393543 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/600ce031-56d1-4574-adb3-72be99bca3fa-util\") pod \"600ce031-56d1-4574-adb3-72be99bca3fa\" (UID: \"600ce031-56d1-4574-adb3-72be99bca3fa\") " Dec 12 16:07:30 crc kubenswrapper[4693]: I1212 16:07:30.393661 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpwtz\" (UniqueName: \"kubernetes.io/projected/600ce031-56d1-4574-adb3-72be99bca3fa-kube-api-access-lpwtz\") pod \"600ce031-56d1-4574-adb3-72be99bca3fa\" (UID: \"600ce031-56d1-4574-adb3-72be99bca3fa\") " Dec 12 16:07:30 crc kubenswrapper[4693]: I1212 16:07:30.394362 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/600ce031-56d1-4574-adb3-72be99bca3fa-bundle" (OuterVolumeSpecName: "bundle") pod "600ce031-56d1-4574-adb3-72be99bca3fa" (UID: "600ce031-56d1-4574-adb3-72be99bca3fa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:07:30 crc kubenswrapper[4693]: I1212 16:07:30.395784 4693 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/600ce031-56d1-4574-adb3-72be99bca3fa-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:07:30 crc kubenswrapper[4693]: I1212 16:07:30.400726 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/600ce031-56d1-4574-adb3-72be99bca3fa-kube-api-access-lpwtz" (OuterVolumeSpecName: "kube-api-access-lpwtz") pod "600ce031-56d1-4574-adb3-72be99bca3fa" (UID: "600ce031-56d1-4574-adb3-72be99bca3fa"). InnerVolumeSpecName "kube-api-access-lpwtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:07:30 crc kubenswrapper[4693]: I1212 16:07:30.410072 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/600ce031-56d1-4574-adb3-72be99bca3fa-util" (OuterVolumeSpecName: "util") pod "600ce031-56d1-4574-adb3-72be99bca3fa" (UID: "600ce031-56d1-4574-adb3-72be99bca3fa"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:07:30 crc kubenswrapper[4693]: I1212 16:07:30.497471 4693 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/600ce031-56d1-4574-adb3-72be99bca3fa-util\") on node \"crc\" DevicePath \"\"" Dec 12 16:07:30 crc kubenswrapper[4693]: I1212 16:07:30.497517 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpwtz\" (UniqueName: \"kubernetes.io/projected/600ce031-56d1-4574-adb3-72be99bca3fa-kube-api-access-lpwtz\") on node \"crc\" DevicePath \"\"" Dec 12 16:07:31 crc kubenswrapper[4693]: I1212 16:07:31.029222 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" event={"ID":"600ce031-56d1-4574-adb3-72be99bca3fa","Type":"ContainerDied","Data":"8b04cefadf3e9e43816925af368128906d446ae338acd872ee4d686021b30867"} Dec 12 16:07:31 crc kubenswrapper[4693]: I1212 16:07:31.029259 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b04cefadf3e9e43816925af368128906d446ae338acd872ee4d686021b30867" Dec 12 16:07:31 crc kubenswrapper[4693]: I1212 16:07:31.029313 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e959a5b6d1f71dafa2e8920885188b070058565baf696ecc786702e8f3r8fsd" Dec 12 16:07:36 crc kubenswrapper[4693]: I1212 16:07:36.850642 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8cb8d8774-9lkdf"] Dec 12 16:07:36 crc kubenswrapper[4693]: E1212 16:07:36.853742 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="600ce031-56d1-4574-adb3-72be99bca3fa" containerName="util" Dec 12 16:07:36 crc kubenswrapper[4693]: I1212 16:07:36.853826 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="600ce031-56d1-4574-adb3-72be99bca3fa" containerName="util" Dec 12 16:07:36 crc kubenswrapper[4693]: E1212 16:07:36.853961 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="600ce031-56d1-4574-adb3-72be99bca3fa" containerName="pull" Dec 12 16:07:36 crc kubenswrapper[4693]: I1212 16:07:36.854018 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="600ce031-56d1-4574-adb3-72be99bca3fa" containerName="pull" Dec 12 16:07:36 crc kubenswrapper[4693]: E1212 16:07:36.854082 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="600ce031-56d1-4574-adb3-72be99bca3fa" containerName="extract" Dec 12 16:07:36 crc kubenswrapper[4693]: I1212 16:07:36.854133 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="600ce031-56d1-4574-adb3-72be99bca3fa" containerName="extract" Dec 12 16:07:36 crc kubenswrapper[4693]: I1212 16:07:36.854426 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="600ce031-56d1-4574-adb3-72be99bca3fa" containerName="extract" Dec 12 16:07:36 crc kubenswrapper[4693]: I1212 16:07:36.855474 4693 util.go:30] "No sandbox for pod can be found. 
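Note: the E-level "RemoveStaleState: removing container" lines from cpu_manager.go, together with the matching state_mem.go and memory_manager.go lines, read like failures but appear to be routine housekeeping: the bundle job's util, pull and extract containers are gone, so the resource managers drop the stale CPU and memory assignments they had recorded for them, just as happened for the deleted registry-server container at 16:07:24.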
Dec 12 16:07:36 crc kubenswrapper[4693]: I1212 16:07:36.857798 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-r5jhd"
Dec 12 16:07:36 crc kubenswrapper[4693]: I1212 16:07:36.896328 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8cb8d8774-9lkdf"]
Dec 12 16:07:36 crc kubenswrapper[4693]: I1212 16:07:36.918744 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk4nf\" (UniqueName: \"kubernetes.io/projected/1da03bb8-e1d0-4e14-9a78-c5bcca1a191f-kube-api-access-sk4nf\") pod \"openstack-operator-controller-operator-8cb8d8774-9lkdf\" (UID: \"1da03bb8-e1d0-4e14-9a78-c5bcca1a191f\") " pod="openstack-operators/openstack-operator-controller-operator-8cb8d8774-9lkdf"
Dec 12 16:07:37 crc kubenswrapper[4693]: I1212 16:07:37.019798 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk4nf\" (UniqueName: \"kubernetes.io/projected/1da03bb8-e1d0-4e14-9a78-c5bcca1a191f-kube-api-access-sk4nf\") pod \"openstack-operator-controller-operator-8cb8d8774-9lkdf\" (UID: \"1da03bb8-e1d0-4e14-9a78-c5bcca1a191f\") " pod="openstack-operators/openstack-operator-controller-operator-8cb8d8774-9lkdf"
Dec 12 16:07:37 crc kubenswrapper[4693]: I1212 16:07:37.041061 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk4nf\" (UniqueName: \"kubernetes.io/projected/1da03bb8-e1d0-4e14-9a78-c5bcca1a191f-kube-api-access-sk4nf\") pod \"openstack-operator-controller-operator-8cb8d8774-9lkdf\" (UID: \"1da03bb8-e1d0-4e14-9a78-c5bcca1a191f\") " pod="openstack-operators/openstack-operator-controller-operator-8cb8d8774-9lkdf"
Dec 12 16:07:37 crc kubenswrapper[4693]: I1212 16:07:37.176992 4693 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-8cb8d8774-9lkdf" Dec 12 16:07:37 crc kubenswrapper[4693]: I1212 16:07:37.745678 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8cb8d8774-9lkdf"] Dec 12 16:07:38 crc kubenswrapper[4693]: I1212 16:07:38.085601 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8cb8d8774-9lkdf" event={"ID":"1da03bb8-e1d0-4e14-9a78-c5bcca1a191f","Type":"ContainerStarted","Data":"d2fc7a8491d9df516550df2441fe3faf51111c6269dd2f3986d0b9b9b07ca06f"} Dec 12 16:07:43 crc kubenswrapper[4693]: I1212 16:07:43.137392 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8cb8d8774-9lkdf" event={"ID":"1da03bb8-e1d0-4e14-9a78-c5bcca1a191f","Type":"ContainerStarted","Data":"533860db69dab9537eb59d52345a42e464e11ce22ba7dd335d34e554c8e913c0"} Dec 12 16:07:43 crc kubenswrapper[4693]: I1212 16:07:43.138439 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-8cb8d8774-9lkdf" Dec 12 16:07:43 crc kubenswrapper[4693]: I1212 16:07:43.185135 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-8cb8d8774-9lkdf" podStartSLOduration=2.135481603 podStartE2EDuration="7.185081332s" podCreationTimestamp="2025-12-12 16:07:36 +0000 UTC" firstStartedPulling="2025-12-12 16:07:37.767296479 +0000 UTC m=+1284.935936080" lastFinishedPulling="2025-12-12 16:07:42.816896208 +0000 UTC m=+1289.985535809" observedRunningTime="2025-12-12 16:07:43.183481449 +0000 UTC m=+1290.352121060" watchObservedRunningTime="2025-12-12 16:07:43.185081332 +0000 UTC m=+1290.353720933" Dec 12 16:07:57 crc kubenswrapper[4693]: I1212 16:07:57.180154 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-8cb8d8774-9lkdf" Dec 12 16:08:15 crc kubenswrapper[4693]: I1212 16:08:15.834932 4693 scope.go:117] "RemoveContainer" containerID="494298c92a77df71d9a44e0599b427ea179ba2cfc4a307d41b74a19e7cd24be7" Dec 12 16:08:15 crc kubenswrapper[4693]: I1212 16:08:15.860166 4693 scope.go:117] "RemoveContainer" containerID="ca7222d66937027eadbb46663c1a419817f148dd66c7cb0b3893470ec6ff7225" Dec 12 16:08:15 crc kubenswrapper[4693]: I1212 16:08:15.879458 4693 scope.go:117] "RemoveContainer" containerID="7cb147a197135a354973a02a5a654bc4ce40e224c8543cf2edb60ce535b2c8d4" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.169887 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.171951 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.194773 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mh7tr" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.258235 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.276876 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.281229 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-hdhgj" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.312789 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.332185 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m78r4\" (UniqueName: \"kubernetes.io/projected/4fa52597-7870-4902-a274-6a4103c3630b-kube-api-access-m78r4\") pod \"barbican-operator-controller-manager-7d9dfd778-tdrqj\" (UID: \"4fa52597-7870-4902-a274-6a4103c3630b\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.341358 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.388920 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-5w2f8"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.390647 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-5w2f8"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.390750 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-5w2f8" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.394894 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-65zx9" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.405807 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.407729 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.409773 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-4ltdp" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.433194 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhsbv\" (UniqueName: \"kubernetes.io/projected/ac211615-b518-4011-be82-483cbb246d4b-kube-api-access-hhsbv\") pod \"cinder-operator-controller-manager-6c677c69b-4wp87\" (UID: \"ac211615-b518-4011-be82-483cbb246d4b\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.433286 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m78r4\" (UniqueName: \"kubernetes.io/projected/4fa52597-7870-4902-a274-6a4103c3630b-kube-api-access-m78r4\") pod \"barbican-operator-controller-manager-7d9dfd778-tdrqj\" (UID: \"4fa52597-7870-4902-a274-6a4103c3630b\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.446355 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.447781 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.450064 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-j9sgb" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.486173 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.502725 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.529306 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.530565 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.532544 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m78r4\" (UniqueName: \"kubernetes.io/projected/4fa52597-7870-4902-a274-6a4103c3630b-kube-api-access-m78r4\") pod \"barbican-operator-controller-manager-7d9dfd778-tdrqj\" (UID: \"4fa52597-7870-4902-a274-6a4103c3630b\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.532922 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.534135 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhsbv\" (UniqueName: \"kubernetes.io/projected/ac211615-b518-4011-be82-483cbb246d4b-kube-api-access-hhsbv\") pod \"cinder-operator-controller-manager-6c677c69b-4wp87\" (UID: \"ac211615-b518-4011-be82-483cbb246d4b\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.534214 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl4j4\" (UniqueName: \"kubernetes.io/projected/0f300296-9b08-4fcc-9933-a752304b3188-kube-api-access-xl4j4\") pod \"designate-operator-controller-manager-697fb699cf-5w2f8\" (UID: \"0f300296-9b08-4fcc-9933-a752304b3188\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-5w2f8" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.534311 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxv7g\" (UniqueName: \"kubernetes.io/projected/fe8f2a92-e87a-40d4-b96b-0e0af6443656-kube-api-access-rxv7g\") pod \"heat-operator-controller-manager-5f64f6f8bb-nrfbt\" (UID: \"fe8f2a92-e87a-40d4-b96b-0e0af6443656\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.534367 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj8hs\" (UniqueName: \"kubernetes.io/projected/f56863f1-3f85-4c6f-a2a6-81f0ee3b6317-kube-api-access-bj8hs\") pod \"glance-operator-controller-manager-5697bb5779-q7mdw\" (UID: \"f56863f1-3f85-4c6f-a2a6-81f0ee3b6317\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.546252 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ph6xq" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.560601 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhsbv\" (UniqueName: \"kubernetes.io/projected/ac211615-b518-4011-be82-483cbb246d4b-kube-api-access-hhsbv\") pod \"cinder-operator-controller-manager-6c677c69b-4wp87\" (UID: \"ac211615-b518-4011-be82-483cbb246d4b\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.602806 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.607665 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.615865 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.615955 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.627018 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ghw6j" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.627206 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.639093 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwfbl\" (UniqueName: \"kubernetes.io/projected/267498f5-fa7b-44ec-bd94-361a261e8844-kube-api-access-lwfbl\") pod \"horizon-operator-controller-manager-68c6d99b8f-lz95h\" (UID: \"267498f5-fa7b-44ec-bd94-361a261e8844\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.639190 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl4j4\" (UniqueName: \"kubernetes.io/projected/0f300296-9b08-4fcc-9933-a752304b3188-kube-api-access-xl4j4\") pod \"designate-operator-controller-manager-697fb699cf-5w2f8\" (UID: \"0f300296-9b08-4fcc-9933-a752304b3188\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-5w2f8" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.639247 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxv7g\" (UniqueName: \"kubernetes.io/projected/fe8f2a92-e87a-40d4-b96b-0e0af6443656-kube-api-access-rxv7g\") pod \"heat-operator-controller-manager-5f64f6f8bb-nrfbt\" (UID: \"fe8f2a92-e87a-40d4-b96b-0e0af6443656\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.639305 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj8hs\" (UniqueName: \"kubernetes.io/projected/f56863f1-3f85-4c6f-a2a6-81f0ee3b6317-kube-api-access-bj8hs\") pod \"glance-operator-controller-manager-5697bb5779-q7mdw\" (UID: \"f56863f1-3f85-4c6f-a2a6-81f0ee3b6317\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.647444 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.670180 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.672679 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.676935 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-pqdsx" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.688763 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj8hs\" (UniqueName: \"kubernetes.io/projected/f56863f1-3f85-4c6f-a2a6-81f0ee3b6317-kube-api-access-bj8hs\") pod \"glance-operator-controller-manager-5697bb5779-q7mdw\" (UID: \"f56863f1-3f85-4c6f-a2a6-81f0ee3b6317\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.700958 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl4j4\" (UniqueName: \"kubernetes.io/projected/0f300296-9b08-4fcc-9933-a752304b3188-kube-api-access-xl4j4\") pod \"designate-operator-controller-manager-697fb699cf-5w2f8\" (UID: \"0f300296-9b08-4fcc-9933-a752304b3188\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-5w2f8" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.702151 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxv7g\" (UniqueName: \"kubernetes.io/projected/fe8f2a92-e87a-40d4-b96b-0e0af6443656-kube-api-access-rxv7g\") pod \"heat-operator-controller-manager-5f64f6f8bb-nrfbt\" (UID: \"fe8f2a92-e87a-40d4-b96b-0e0af6443656\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.706061 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.710659 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.713213 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-5fbqs" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.720780 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-5w2f8" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.724633 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.733661 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.740896 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f9dd\" (UniqueName: \"kubernetes.io/projected/57409d4d-edf7-400c-9fcf-d6116ac22968-kube-api-access-6f9dd\") pod \"infra-operator-controller-manager-78d48bff9d-vbmgp\" (UID: \"57409d4d-edf7-400c-9fcf-d6116ac22968\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.740946 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hqjc\" (UniqueName: \"kubernetes.io/projected/8c0d2adb-6fec-4574-8733-b6e817a943e5-kube-api-access-9hqjc\") pod \"ironic-operator-controller-manager-967d97867-cdd8s\" (UID: \"8c0d2adb-6fec-4574-8733-b6e817a943e5\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.740983 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwfbl\" (UniqueName: \"kubernetes.io/projected/267498f5-fa7b-44ec-bd94-361a261e8844-kube-api-access-lwfbl\") pod \"horizon-operator-controller-manager-68c6d99b8f-lz95h\" (UID: \"267498f5-fa7b-44ec-bd94-361a261e8844\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.741021 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert\") pod \"infra-operator-controller-manager-78d48bff9d-vbmgp\" (UID: \"57409d4d-edf7-400c-9fcf-d6116ac22968\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.757882 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.771807 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwfbl\" (UniqueName: \"kubernetes.io/projected/267498f5-fa7b-44ec-bd94-361a261e8844-kube-api-access-lwfbl\") pod \"horizon-operator-controller-manager-68c6d99b8f-lz95h\" (UID: \"267498f5-fa7b-44ec-bd94-361a261e8844\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.774231 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.786305 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.787672 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.790798 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-jfpcj" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.792548 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt"
Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.822525 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg"]
Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.836218 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-cb6hq"]
Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.837686 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-cb6hq"]
Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.837769 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-cb6hq"
Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.842160 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9mxb\" (UniqueName: \"kubernetes.io/projected/4c46ca75-8071-4f2a-bda0-44bf851365cb-kube-api-access-b9mxb\") pod \"keystone-operator-controller-manager-7765d96ddf-bkfnh\" (UID: \"4c46ca75-8071-4f2a-bda0-44bf851365cb\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh"
Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.842226 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnv7b\" (UniqueName: \"kubernetes.io/projected/f73b5773-7bac-41ae-af91-0e504b5a234f-kube-api-access-dnv7b\") pod \"manila-operator-controller-manager-5b5fd79c9c-pt4rg\" (UID: \"f73b5773-7bac-41ae-af91-0e504b5a234f\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg"
Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.842389 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f9dd\" (UniqueName: \"kubernetes.io/projected/57409d4d-edf7-400c-9fcf-d6116ac22968-kube-api-access-6f9dd\") pod \"infra-operator-controller-manager-78d48bff9d-vbmgp\" (UID: \"57409d4d-edf7-400c-9fcf-d6116ac22968\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp"
Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.842437 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hqjc\" (UniqueName: \"kubernetes.io/projected/8c0d2adb-6fec-4574-8733-b6e817a943e5-kube-api-access-9hqjc\") pod \"ironic-operator-controller-manager-967d97867-cdd8s\" (UID: \"8c0d2adb-6fec-4574-8733-b6e817a943e5\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s"
Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.842512 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert\") pod \"infra-operator-controller-manager-78d48bff9d-vbmgp\" (UID: \"57409d4d-edf7-400c-9fcf-d6116ac22968\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp"
Dec 12 16:08:21 crc kubenswrapper[4693]: E1212 16:08:21.842675 4693 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Dec 12 16:08:21 crc kubenswrapper[4693]: E1212 16:08:21.842724 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert podName:57409d4d-edf7-400c-9fcf-d6116ac22968 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:22.342704308 +0000 UTC m=+1329.511343909 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert") pod "infra-operator-controller-manager-78d48bff9d-vbmgp" (UID: "57409d4d-edf7-400c-9fcf-d6116ac22968") : secret "infra-operator-webhook-server-cert" not found
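Note: the two E-level entries above are an ordering race during the operator rollout rather than a crash: the infra-operator pod declares a cert volume backed by the infra-operator-webhook-server-cert secret, which has not been created yet, so MountVolume.SetUp fails and the volume manager schedules a retry (durationBeforeRetry 500ms here; the delay grows on repeated failures). The pod waits in ContainerCreating until the secret exists; if it stayed stuck, the first thing to check would be whether the secret ever appeared, for example with "oc get secret infra-operator-webhook-server-cert -n openstack-operators".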
"{volumeName:kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert podName:57409d4d-edf7-400c-9fcf-d6116ac22968 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:22.342704308 +0000 UTC m=+1329.511343909 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert") pod "infra-operator-controller-manager-78d48bff9d-vbmgp" (UID: "57409d4d-edf7-400c-9fcf-d6116ac22968") : secret "infra-operator-webhook-server-cert" not found Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.849263 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-mlzvc" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.889611 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cg948"] Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.892458 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f9dd\" (UniqueName: \"kubernetes.io/projected/57409d4d-edf7-400c-9fcf-d6116ac22968-kube-api-access-6f9dd\") pod \"infra-operator-controller-manager-78d48bff9d-vbmgp\" (UID: \"57409d4d-edf7-400c-9fcf-d6116ac22968\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.899395 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cg948" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.916344 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hqjc\" (UniqueName: \"kubernetes.io/projected/8c0d2adb-6fec-4574-8733-b6e817a943e5-kube-api-access-9hqjc\") pod \"ironic-operator-controller-manager-967d97867-cdd8s\" (UID: \"8c0d2adb-6fec-4574-8733-b6e817a943e5\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.923946 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-4zjvk" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.943778 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9mxb\" (UniqueName: \"kubernetes.io/projected/4c46ca75-8071-4f2a-bda0-44bf851365cb-kube-api-access-b9mxb\") pod \"keystone-operator-controller-manager-7765d96ddf-bkfnh\" (UID: \"4c46ca75-8071-4f2a-bda0-44bf851365cb\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.943839 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnv7b\" (UniqueName: \"kubernetes.io/projected/f73b5773-7bac-41ae-af91-0e504b5a234f-kube-api-access-dnv7b\") pod \"manila-operator-controller-manager-5b5fd79c9c-pt4rg\" (UID: \"f73b5773-7bac-41ae-af91-0e504b5a234f\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.943916 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lczpz\" (UniqueName: \"kubernetes.io/projected/f8bfea4b-063c-461e-9116-63d76fd06130-kube-api-access-lczpz\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-cg948\" (UID: 
\"f8bfea4b-063c-461e-9116-63d76fd06130\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cg948" Dec 12 16:08:21 crc kubenswrapper[4693]: I1212 16:08:21.944027 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn884\" (UniqueName: \"kubernetes.io/projected/b3dfd27f-9569-444d-a917-04c7f4c67ec9-kube-api-access-tn884\") pod \"mariadb-operator-controller-manager-79c8c4686c-cb6hq\" (UID: \"b3dfd27f-9569-444d-a917-04c7f4c67ec9\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-cb6hq" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.004211 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-w5zgk"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.008747 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnv7b\" (UniqueName: \"kubernetes.io/projected/f73b5773-7bac-41ae-af91-0e504b5a234f-kube-api-access-dnv7b\") pod \"manila-operator-controller-manager-5b5fd79c9c-pt4rg\" (UID: \"f73b5773-7bac-41ae-af91-0e504b5a234f\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.010064 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w5zgk" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.013905 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9mxb\" (UniqueName: \"kubernetes.io/projected/4c46ca75-8071-4f2a-bda0-44bf851365cb-kube-api-access-b9mxb\") pod \"keystone-operator-controller-manager-7765d96ddf-bkfnh\" (UID: \"4c46ca75-8071-4f2a-bda0-44bf851365cb\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.017586 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-577xw" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.030883 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cg948"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.044992 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lczpz\" (UniqueName: \"kubernetes.io/projected/f8bfea4b-063c-461e-9116-63d76fd06130-kube-api-access-lczpz\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-cg948\" (UID: \"f8bfea4b-063c-461e-9116-63d76fd06130\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cg948" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.045081 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn884\" (UniqueName: \"kubernetes.io/projected/b3dfd27f-9569-444d-a917-04c7f4c67ec9-kube-api-access-tn884\") pod \"mariadb-operator-controller-manager-79c8c4686c-cb6hq\" (UID: \"b3dfd27f-9569-444d-a917-04c7f4c67ec9\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-cb6hq" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.045141 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s684k\" (UniqueName: \"kubernetes.io/projected/fc9969a7-d068-499a-90a4-571822a60c5b-kube-api-access-s684k\") pod 
\"nova-operator-controller-manager-697bc559fc-w5zgk\" (UID: \"fc9969a7-d068-499a-90a4-571822a60c5b\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w5zgk" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.050890 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-9jcss"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.085367 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.087257 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn884\" (UniqueName: \"kubernetes.io/projected/b3dfd27f-9569-444d-a917-04c7f4c67ec9-kube-api-access-tn884\") pod \"mariadb-operator-controller-manager-79c8c4686c-cb6hq\" (UID: \"b3dfd27f-9569-444d-a917-04c7f4c67ec9\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-cb6hq" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.088123 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lczpz\" (UniqueName: \"kubernetes.io/projected/f8bfea4b-063c-461e-9116-63d76fd06130-kube-api-access-lczpz\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-cg948\" (UID: \"f8bfea4b-063c-461e-9116-63d76fd06130\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cg948" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.088497 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-27h99" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.115167 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-w5zgk"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.143185 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.151458 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4575\" (UniqueName: \"kubernetes.io/projected/a41df83d-6bb2-4c49-a431-f5851036a44d-kube-api-access-w4575\") pod \"octavia-operator-controller-manager-998648c74-9jcss\" (UID: \"a41df83d-6bb2-4c49-a431-f5851036a44d\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.151584 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s684k\" (UniqueName: \"kubernetes.io/projected/fc9969a7-d068-499a-90a4-571822a60c5b-kube-api-access-s684k\") pod \"nova-operator-controller-manager-697bc559fc-w5zgk\" (UID: \"fc9969a7-d068-499a-90a4-571822a60c5b\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w5zgk" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.163006 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-9jcss"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.173166 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s684k\" (UniqueName: \"kubernetes.io/projected/fc9969a7-d068-499a-90a4-571822a60c5b-kube-api-access-s684k\") pod \"nova-operator-controller-manager-697bc559fc-w5zgk\" (UID: \"fc9969a7-d068-499a-90a4-571822a60c5b\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w5zgk" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.181579 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.203510 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.205376 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.222300 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-7sjmp" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.253755 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hsfq\" (UniqueName: \"kubernetes.io/projected/96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c-kube-api-access-7hsfq\") pod \"ovn-operator-controller-manager-b6456fdb6-v852d\" (UID: \"96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.253869 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4575\" (UniqueName: \"kubernetes.io/projected/a41df83d-6bb2-4c49-a431-f5851036a44d-kube-api-access-w4575\") pod \"octavia-operator-controller-manager-998648c74-9jcss\" (UID: \"a41df83d-6bb2-4c49-a431-f5851036a44d\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.258697 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.267232 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.268881 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.276406 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.276905 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.278558 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.296621 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zvl74" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.296671 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.297482 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-k6gjd" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.298767 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.300688 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.305348 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-cb6hq" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.310122 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-qbqr2" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.318088 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.332381 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4575\" (UniqueName: \"kubernetes.io/projected/a41df83d-6bb2-4c49-a431-f5851036a44d-kube-api-access-w4575\") pod \"octavia-operator-controller-manager-998648c74-9jcss\" (UID: \"a41df83d-6bb2-4c49-a431-f5851036a44d\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.333447 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.336771 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cg948" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.354173 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.355919 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.356319 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w49kr\" (UniqueName: \"kubernetes.io/projected/847f97b7-84be-4d2a-a699-30ca49fd1023-kube-api-access-w49kr\") pod \"swift-operator-controller-manager-9d58d64bc-wsr9h\" (UID: \"847f97b7-84be-4d2a-a699-30ca49fd1023\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.356364 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert\") pod \"infra-operator-controller-manager-78d48bff9d-vbmgp\" (UID: \"57409d4d-edf7-400c-9fcf-d6116ac22968\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.356439 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fczg\" (UniqueName: \"kubernetes.io/projected/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-kube-api-access-9fczg\") pod \"openstack-baremetal-operator-controller-manager-84b575879fmkmwl\" (UID: \"51d89b29-7872-4e9d-9fdd-b1fdd7de6de3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.356468 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrmzl\" (UniqueName: \"kubernetes.io/projected/463ba770-ab51-4445-8f63-bd5615ddb865-kube-api-access-wrmzl\") pod \"placement-operator-controller-manager-78f8948974-4dnpb\" (UID: \"463ba770-ab51-4445-8f63-bd5615ddb865\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.356497 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fmkmwl\" (UID: \"51d89b29-7872-4e9d-9fdd-b1fdd7de6de3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.356535 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hsfq\" (UniqueName: \"kubernetes.io/projected/96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c-kube-api-access-7hsfq\") pod \"ovn-operator-controller-manager-b6456fdb6-v852d\" (UID: \"96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" Dec 12 16:08:22 crc kubenswrapper[4693]: E1212 16:08:22.356865 4693 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 12 16:08:22 crc kubenswrapper[4693]: E1212 16:08:22.356905 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert podName:57409d4d-edf7-400c-9fcf-d6116ac22968 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:23.356889865 +0000 UTC m=+1330.525529456 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert") pod "infra-operator-controller-manager-78d48bff9d-vbmgp" (UID: "57409d4d-edf7-400c-9fcf-d6116ac22968") : secret "infra-operator-webhook-server-cert" not found Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.359594 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-5dqmw" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.366017 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.376069 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.404191 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-b86kc"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.406549 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.415544 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w5zgk" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.417157 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-lwjqb" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.425546 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hsfq\" (UniqueName: \"kubernetes.io/projected/96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c-kube-api-access-7hsfq\") pod \"ovn-operator-controller-manager-b6456fdb6-v852d\" (UID: \"96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.434704 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-b86kc"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.458008 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w49kr\" (UniqueName: \"kubernetes.io/projected/847f97b7-84be-4d2a-a699-30ca49fd1023-kube-api-access-w49kr\") pod \"swift-operator-controller-manager-9d58d64bc-wsr9h\" (UID: \"847f97b7-84be-4d2a-a699-30ca49fd1023\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.458536 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fczg\" (UniqueName: \"kubernetes.io/projected/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-kube-api-access-9fczg\") pod \"openstack-baremetal-operator-controller-manager-84b575879fmkmwl\" (UID: \"51d89b29-7872-4e9d-9fdd-b1fdd7de6de3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.458665 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrmzl\" (UniqueName: 
\"kubernetes.io/projected/463ba770-ab51-4445-8f63-bd5615ddb865-kube-api-access-wrmzl\") pod \"placement-operator-controller-manager-78f8948974-4dnpb\" (UID: \"463ba770-ab51-4445-8f63-bd5615ddb865\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.458778 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fmkmwl\" (UID: \"51d89b29-7872-4e9d-9fdd-b1fdd7de6de3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.458936 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c97jr\" (UniqueName: \"kubernetes.io/projected/26537316-7b55-48dc-b952-bc2220120194-kube-api-access-c97jr\") pod \"telemetry-operator-controller-manager-6676c589bf-7kphf\" (UID: \"26537316-7b55-48dc-b952-bc2220120194\") " pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.459073 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f8xk\" (UniqueName: \"kubernetes.io/projected/5616685d-71d7-49b9-8c1b-6eccc11a74a1-kube-api-access-6f8xk\") pod \"test-operator-controller-manager-5854674fcc-b86kc\" (UID: \"5616685d-71d7-49b9-8c1b-6eccc11a74a1\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" Dec 12 16:08:22 crc kubenswrapper[4693]: E1212 16:08:22.459962 4693 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 16:08:22 crc kubenswrapper[4693]: E1212 16:08:22.460100 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert podName:51d89b29-7872-4e9d-9fdd-b1fdd7de6de3 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:22.960082568 +0000 UTC m=+1330.128722169 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fmkmwl" (UID: "51d89b29-7872-4e9d-9fdd-b1fdd7de6de3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.460436 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.461600 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.462789 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.480466 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-4dxsp" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.490739 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.503887 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fczg\" (UniqueName: \"kubernetes.io/projected/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-kube-api-access-9fczg\") pod \"openstack-baremetal-operator-controller-manager-84b575879fmkmwl\" (UID: \"51d89b29-7872-4e9d-9fdd-b1fdd7de6de3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.506592 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w49kr\" (UniqueName: \"kubernetes.io/projected/847f97b7-84be-4d2a-a699-30ca49fd1023-kube-api-access-w49kr\") pod \"swift-operator-controller-manager-9d58d64bc-wsr9h\" (UID: \"847f97b7-84be-4d2a-a699-30ca49fd1023\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.521762 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrmzl\" (UniqueName: \"kubernetes.io/projected/463ba770-ab51-4445-8f63-bd5615ddb865-kube-api-access-wrmzl\") pod \"placement-operator-controller-manager-78f8948974-4dnpb\" (UID: \"463ba770-ab51-4445-8f63-bd5615ddb865\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.531716 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.533117 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.535523 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.535594 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zm4fx" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.535697 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.535959 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.564698 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l52xm\" (UniqueName: \"kubernetes.io/projected/fc2504e1-7808-49ef-9df0-2fda81f786f6-kube-api-access-l52xm\") pod \"watcher-operator-controller-manager-75944c9b7-clqpc\" (UID: \"fc2504e1-7808-49ef-9df0-2fda81f786f6\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.564768 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwmrc\" (UniqueName: \"kubernetes.io/projected/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-kube-api-access-bwmrc\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.564838 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.564916 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c97jr\" (UniqueName: \"kubernetes.io/projected/26537316-7b55-48dc-b952-bc2220120194-kube-api-access-c97jr\") pod \"telemetry-operator-controller-manager-6676c589bf-7kphf\" (UID: \"26537316-7b55-48dc-b952-bc2220120194\") " pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.564953 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.564987 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f8xk\" (UniqueName: \"kubernetes.io/projected/5616685d-71d7-49b9-8c1b-6eccc11a74a1-kube-api-access-6f8xk\") pod \"test-operator-controller-manager-5854674fcc-b86kc\" (UID: \"5616685d-71d7-49b9-8c1b-6eccc11a74a1\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.565514 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.610246 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c97jr\" (UniqueName: \"kubernetes.io/projected/26537316-7b55-48dc-b952-bc2220120194-kube-api-access-c97jr\") pod \"telemetry-operator-controller-manager-6676c589bf-7kphf\" (UID: \"26537316-7b55-48dc-b952-bc2220120194\") " 
pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.610317 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f8xk\" (UniqueName: \"kubernetes.io/projected/5616685d-71d7-49b9-8c1b-6eccc11a74a1-kube-api-access-6f8xk\") pod \"test-operator-controller-manager-5854674fcc-b86kc\" (UID: \"5616685d-71d7-49b9-8c1b-6eccc11a74a1\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.644675 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9ngm"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.646305 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9ngm" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.648597 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-nzk4w" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.655521 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9ngm"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.666065 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.666170 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l52xm\" (UniqueName: \"kubernetes.io/projected/fc2504e1-7808-49ef-9df0-2fda81f786f6-kube-api-access-l52xm\") pod \"watcher-operator-controller-manager-75944c9b7-clqpc\" (UID: \"fc2504e1-7808-49ef-9df0-2fda81f786f6\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc" Dec 12 16:08:22 crc kubenswrapper[4693]: E1212 16:08:22.666191 4693 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 12 16:08:22 crc kubenswrapper[4693]: E1212 16:08:22.666288 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs podName:ffd9365b-fb4c-4a2b-a168-fcf9cce89228 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:23.166249238 +0000 UTC m=+1330.334888899 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs") pod "openstack-operator-controller-manager-565558f958-fnjh4" (UID: "ffd9365b-fb4c-4a2b-a168-fcf9cce89228") : secret "webhook-server-cert" not found Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.666201 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwmrc\" (UniqueName: \"kubernetes.io/projected/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-kube-api-access-bwmrc\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.666571 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:22 crc kubenswrapper[4693]: E1212 16:08:22.666776 4693 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 12 16:08:22 crc kubenswrapper[4693]: E1212 16:08:22.666823 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs podName:ffd9365b-fb4c-4a2b-a168-fcf9cce89228 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:23.166810833 +0000 UTC m=+1330.335450434 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs") pod "openstack-operator-controller-manager-565558f958-fnjh4" (UID: "ffd9365b-fb4c-4a2b-a168-fcf9cce89228") : secret "metrics-server-cert" not found Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.691000 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwmrc\" (UniqueName: \"kubernetes.io/projected/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-kube-api-access-bwmrc\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.694004 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l52xm\" (UniqueName: \"kubernetes.io/projected/fc2504e1-7808-49ef-9df0-2fda81f786f6-kube-api-access-l52xm\") pod \"watcher-operator-controller-manager-75944c9b7-clqpc\" (UID: \"fc2504e1-7808-49ef-9df0-2fda81f786f6\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.704648 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.737238 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.769691 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcbtx\" (UniqueName: \"kubernetes.io/projected/e1146df6-70f0-4074-9f67-668bdb272e1a-kube-api-access-pcbtx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-c9ngm\" (UID: \"e1146df6-70f0-4074-9f67-668bdb272e1a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9ngm" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.785569 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.837439 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.869011 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.875737 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcbtx\" (UniqueName: \"kubernetes.io/projected/e1146df6-70f0-4074-9f67-668bdb272e1a-kube-api-access-pcbtx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-c9ngm\" (UID: \"e1146df6-70f0-4074-9f67-668bdb272e1a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9ngm" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.904362 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcbtx\" (UniqueName: \"kubernetes.io/projected/e1146df6-70f0-4074-9f67-668bdb272e1a-kube-api-access-pcbtx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-c9ngm\" (UID: \"e1146df6-70f0-4074-9f67-668bdb272e1a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9ngm" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.909475 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc" Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.909946 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-5w2f8"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.960395 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj"] Dec 12 16:08:22 crc kubenswrapper[4693]: I1212 16:08:22.979975 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fmkmwl\" (UID: \"51d89b29-7872-4e9d-9fdd-b1fdd7de6de3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 16:08:22 crc kubenswrapper[4693]: E1212 16:08:22.980140 4693 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 16:08:22 crc kubenswrapper[4693]: E1212 16:08:22.980224 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert podName:51d89b29-7872-4e9d-9fdd-b1fdd7de6de3 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:23.980192364 +0000 UTC m=+1331.148831965 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fmkmwl" (UID: "51d89b29-7872-4e9d-9fdd-b1fdd7de6de3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.084186 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9ngm" Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.191020 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.191531 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:23 crc kubenswrapper[4693]: E1212 16:08:23.191762 4693 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 12 16:08:23 crc kubenswrapper[4693]: E1212 16:08:23.191832 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs podName:ffd9365b-fb4c-4a2b-a168-fcf9cce89228 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:24.19181115 +0000 UTC m=+1331.360450751 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs") pod "openstack-operator-controller-manager-565558f958-fnjh4" (UID: "ffd9365b-fb4c-4a2b-a168-fcf9cce89228") : secret "webhook-server-cert" not found Dec 12 16:08:23 crc kubenswrapper[4693]: E1212 16:08:23.192405 4693 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 12 16:08:23 crc kubenswrapper[4693]: E1212 16:08:23.192434 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs podName:ffd9365b-fb4c-4a2b-a168-fcf9cce89228 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:24.192425297 +0000 UTC m=+1331.361064898 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs") pod "openstack-operator-controller-manager-565558f958-fnjh4" (UID: "ffd9365b-fb4c-4a2b-a168-fcf9cce89228") : secret "metrics-server-cert" not found Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.397472 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert\") pod \"infra-operator-controller-manager-78d48bff9d-vbmgp\" (UID: \"57409d4d-edf7-400c-9fcf-d6116ac22968\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" Dec 12 16:08:23 crc kubenswrapper[4693]: E1212 16:08:23.398309 4693 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 12 16:08:23 crc kubenswrapper[4693]: E1212 16:08:23.398381 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert podName:57409d4d-edf7-400c-9fcf-d6116ac22968 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:25.39834599 +0000 UTC m=+1332.566985591 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert") pod "infra-operator-controller-manager-78d48bff9d-vbmgp" (UID: "57409d4d-edf7-400c-9fcf-d6116ac22968") : secret "infra-operator-webhook-server-cert" not found Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.453019 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h"] Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.453058 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw"] Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.532762 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h" event={"ID":"267498f5-fa7b-44ec-bd94-361a261e8844","Type":"ContainerStarted","Data":"2c00dd651e32d402762b98c7afa0ff38793b442e8c25acd258c8ca7ff65d3d66"} Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.536134 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj" event={"ID":"4fa52597-7870-4902-a274-6a4103c3630b","Type":"ContainerStarted","Data":"56f91a99ac8355baa96709799ba6ef32db1ca07b5210b2472bc3b6f8abead4c4"} Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.537150 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" event={"ID":"ac211615-b518-4011-be82-483cbb246d4b","Type":"ContainerStarted","Data":"814d668a810555864102d7f6c8a78d9d1d988321ca76433ae4636397c0cc3947"} Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.538000 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw" event={"ID":"f56863f1-3f85-4c6f-a2a6-81f0ee3b6317","Type":"ContainerStarted","Data":"299bb41f128e36a400d960e0254717f10d991726c69acb77005cc96738bc46f5"} Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.539207 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-5w2f8" event={"ID":"0f300296-9b08-4fcc-9933-a752304b3188","Type":"ContainerStarted","Data":"840607f3174689f1332f76117308963a708b9fe890af31534e3af6f92e4ecb00"} Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.793804 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg"] Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.849165 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cg948"] Dec 12 16:08:23 crc kubenswrapper[4693]: W1212 16:08:23.869451 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8bfea4b_063c_461e_9116_63d76fd06130.slice/crio-0410788d7e50c2ed5cf7516c6ea12ff77dc164f54a151acc54170cab0ad63835 WatchSource:0}: Error finding container 0410788d7e50c2ed5cf7516c6ea12ff77dc164f54a151acc54170cab0ad63835: Status 404 returned error can't find the container with id 0410788d7e50c2ed5cf7516c6ea12ff77dc164f54a151acc54170cab0ad63835 Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.871993 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh"] Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.907385 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt"] Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.953311 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s"] Dec 12 16:08:23 crc kubenswrapper[4693]: I1212 16:08:23.988496 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-w5zgk"] Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.000350 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-9jcss"] Dec 12 16:08:24 crc kubenswrapper[4693]: W1212 16:08:24.008969 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc9969a7_d068_499a_90a4_571822a60c5b.slice/crio-8c9dde06a36fa2e1389207cd3d64161825aff7fcbc8ffbecaee0eee92ea58fd7 WatchSource:0}: Error finding container 8c9dde06a36fa2e1389207cd3d64161825aff7fcbc8ffbecaee0eee92ea58fd7: Status 404 returned error can't find the container with id 8c9dde06a36fa2e1389207cd3d64161825aff7fcbc8ffbecaee0eee92ea58fd7 Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.012240 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-cb6hq"] Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.026176 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fmkmwl\" (UID: \"51d89b29-7872-4e9d-9fdd-b1fdd7de6de3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 16:08:24 crc kubenswrapper[4693]: E1212 16:08:24.026381 4693 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 16:08:24 crc kubenswrapper[4693]: E1212 16:08:24.027227 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert podName:51d89b29-7872-4e9d-9fdd-b1fdd7de6de3 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:26.027188247 +0000 UTC m=+1333.195827848 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fmkmwl" (UID: "51d89b29-7872-4e9d-9fdd-b1fdd7de6de3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.233986 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.234124 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:24 crc kubenswrapper[4693]: E1212 16:08:24.234186 4693 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 12 16:08:24 crc kubenswrapper[4693]: E1212 16:08:24.234252 4693 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 12 16:08:24 crc kubenswrapper[4693]: E1212 16:08:24.234280 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs podName:ffd9365b-fb4c-4a2b-a168-fcf9cce89228 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:26.234247581 +0000 UTC m=+1333.402887182 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs") pod "openstack-operator-controller-manager-565558f958-fnjh4" (UID: "ffd9365b-fb4c-4a2b-a168-fcf9cce89228") : secret "webhook-server-cert" not found Dec 12 16:08:24 crc kubenswrapper[4693]: E1212 16:08:24.234318 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs podName:ffd9365b-fb4c-4a2b-a168-fcf9cce89228 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:26.234302533 +0000 UTC m=+1333.402942224 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs") pod "openstack-operator-controller-manager-565558f958-fnjh4" (UID: "ffd9365b-fb4c-4a2b-a168-fcf9cce89228") : secret "metrics-server-cert" not found Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.467948 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h"] Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.483936 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb"] Dec 12 16:08:24 crc kubenswrapper[4693]: W1212 16:08:24.489128 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod847f97b7_84be_4d2a_a699_30ca49fd1023.slice/crio-1deeecef63b109764a3f49d5a06ccbe88e604d7b04284877f1e4509f33e78879 WatchSource:0}: Error finding container 1deeecef63b109764a3f49d5a06ccbe88e604d7b04284877f1e4509f33e78879: Status 404 returned error can't find the container with id 1deeecef63b109764a3f49d5a06ccbe88e604d7b04284877f1e4509f33e78879 Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.523232 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf"] Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.543675 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d"] Dec 12 16:08:24 crc kubenswrapper[4693]: E1212 16:08:24.554852 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.188:5001/openstack-k8s-operators/telemetry-operator:e56ff59cddfcaaf1ce66e7783ae9c0344c631735,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c97jr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6676c589bf-7kphf_openstack-operators(26537316-7b55-48dc-b952-bc2220120194): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 12 16:08:24 crc kubenswrapper[4693]: E1212 16:08:24.560233 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c97jr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6676c589bf-7kphf_openstack-operators(26537316-7b55-48dc-b952-bc2220120194): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.560303 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-b86kc"] Dec 12 16:08:24 crc kubenswrapper[4693]: E1212 16:08:24.563658 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" podUID="26537316-7b55-48dc-b952-bc2220120194" Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.570284 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" 
event={"ID":"5616685d-71d7-49b9-8c1b-6eccc11a74a1","Type":"ContainerStarted","Data":"94b418f5647c9f78dc53a671474d72b0b08a0a40d81a04a93437747662cd686f"} Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.571767 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb" event={"ID":"463ba770-ab51-4445-8f63-bd5615ddb865","Type":"ContainerStarted","Data":"79e2ae89a1ba2449ed2585cfe1027c3c3b1710468f8b2b92e96244a4524ee4f0"} Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.575495 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9ngm"] Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.577878 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" event={"ID":"a41df83d-6bb2-4c49-a431-f5851036a44d","Type":"ContainerStarted","Data":"243337e2344cf9e57effb736228ed862fe16f5e6737aacb121103e3f96ecffa2"} Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.590320 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-cb6hq" event={"ID":"b3dfd27f-9569-444d-a917-04c7f4c67ec9","Type":"ContainerStarted","Data":"1a86ff36427b6e4397d1b818766a7f6471418c6960744a4ecc2aa4225dc384c7"} Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.598039 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc"] Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.605377 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" event={"ID":"26537316-7b55-48dc-b952-bc2220120194","Type":"ContainerStarted","Data":"af23a0c6187ab26ffe6e4b0e4c2706a6685471162733bd4408293e92b6bb0b82"} Dec 12 16:08:24 crc kubenswrapper[4693]: W1212 16:08:24.606107 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc2504e1_7808_49ef_9df0_2fda81f786f6.slice/crio-54b98c6dbd25497c4048c04e84500e34b4740053b2337dc6626729808844a89e WatchSource:0}: Error finding container 54b98c6dbd25497c4048c04e84500e34b4740053b2337dc6626729808844a89e: Status 404 returned error can't find the container with id 54b98c6dbd25497c4048c04e84500e34b4740053b2337dc6626729808844a89e Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.607816 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" event={"ID":"96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c","Type":"ContainerStarted","Data":"adce83e7a04c48637847df4cdb9ec26ef49a38ad1c199be9e059c6820d8631a7"} Dec 12 16:08:24 crc kubenswrapper[4693]: E1212 16:08:24.607867 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l52xm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-clqpc_openstack-operators(fc2504e1-7808-49ef-9df0-2fda81f786f6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 12 16:08:24 crc kubenswrapper[4693]: E1212 16:08:24.607934 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.188:5001/openstack-k8s-operators/telemetry-operator:e56ff59cddfcaaf1ce66e7783ae9c0344c631735\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" podUID="26537316-7b55-48dc-b952-bc2220120194" Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.609295 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh" event={"ID":"4c46ca75-8071-4f2a-bda0-44bf851365cb","Type":"ContainerStarted","Data":"1ae58f3ed4ec9521c02ce225b828192195f392fac738210b90cbfbb90f476125"} Dec 12 16:08:24 crc kubenswrapper[4693]: E1212 16:08:24.609416 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l52xm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-clqpc_openstack-operators(fc2504e1-7808-49ef-9df0-2fda81f786f6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 12 16:08:24 crc kubenswrapper[4693]: E1212 16:08:24.610701 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc" podUID="fc2504e1-7808-49ef-9df0-2fda81f786f6" Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.611018 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9ngm" event={"ID":"e1146df6-70f0-4074-9f67-668bdb272e1a","Type":"ContainerStarted","Data":"eaa28d963c7004afe8c1902aa2c0988e7e0910680da14c0ac820df34f1cca74e"} Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.612756 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" event={"ID":"fe8f2a92-e87a-40d4-b96b-0e0af6443656","Type":"ContainerStarted","Data":"6199edaa2c7f7e8300fa517662d54a7b14339977908b83ae40f945bd406bbd81"} Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.615502 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w5zgk" event={"ID":"fc9969a7-d068-499a-90a4-571822a60c5b","Type":"ContainerStarted","Data":"8c9dde06a36fa2e1389207cd3d64161825aff7fcbc8ffbecaee0eee92ea58fd7"} Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.616808 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h" event={"ID":"847f97b7-84be-4d2a-a699-30ca49fd1023","Type":"ContainerStarted","Data":"1deeecef63b109764a3f49d5a06ccbe88e604d7b04284877f1e4509f33e78879"} Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.618427 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cg948" event={"ID":"f8bfea4b-063c-461e-9116-63d76fd06130","Type":"ContainerStarted","Data":"0410788d7e50c2ed5cf7516c6ea12ff77dc164f54a151acc54170cab0ad63835"} 
Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.619725 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" event={"ID":"8c0d2adb-6fec-4574-8733-b6e817a943e5","Type":"ContainerStarted","Data":"62bc737b61f49ecb9ec4d0627c3f819aa835acd1a3989c2ad13633f6f510a48e"} Dec 12 16:08:24 crc kubenswrapper[4693]: I1212 16:08:24.626069 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg" event={"ID":"f73b5773-7bac-41ae-af91-0e504b5a234f","Type":"ContainerStarted","Data":"04e8ae9e7ca967c48a1f6cd52bdce995e06c2d032263527658835b4a3a88d5eb"} Dec 12 16:08:25 crc kubenswrapper[4693]: I1212 16:08:25.460058 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert\") pod \"infra-operator-controller-manager-78d48bff9d-vbmgp\" (UID: \"57409d4d-edf7-400c-9fcf-d6116ac22968\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" Dec 12 16:08:25 crc kubenswrapper[4693]: E1212 16:08:25.460434 4693 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 12 16:08:25 crc kubenswrapper[4693]: E1212 16:08:25.460564 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert podName:57409d4d-edf7-400c-9fcf-d6116ac22968 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:29.460528083 +0000 UTC m=+1336.629167674 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert") pod "infra-operator-controller-manager-78d48bff9d-vbmgp" (UID: "57409d4d-edf7-400c-9fcf-d6116ac22968") : secret "infra-operator-webhook-server-cert" not found Dec 12 16:08:25 crc kubenswrapper[4693]: I1212 16:08:25.646412 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc" event={"ID":"fc2504e1-7808-49ef-9df0-2fda81f786f6","Type":"ContainerStarted","Data":"54b98c6dbd25497c4048c04e84500e34b4740053b2337dc6626729808844a89e"} Dec 12 16:08:25 crc kubenswrapper[4693]: E1212 16:08:25.648501 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc" podUID="fc2504e1-7808-49ef-9df0-2fda81f786f6" Dec 12 16:08:25 crc kubenswrapper[4693]: E1212 16:08:25.648701 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.188:5001/openstack-k8s-operators/telemetry-operator:e56ff59cddfcaaf1ce66e7783ae9c0344c631735\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" 
podUID="26537316-7b55-48dc-b952-bc2220120194" Dec 12 16:08:26 crc kubenswrapper[4693]: I1212 16:08:26.076038 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fmkmwl\" (UID: \"51d89b29-7872-4e9d-9fdd-b1fdd7de6de3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 16:08:26 crc kubenswrapper[4693]: E1212 16:08:26.076256 4693 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 16:08:26 crc kubenswrapper[4693]: E1212 16:08:26.076318 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert podName:51d89b29-7872-4e9d-9fdd-b1fdd7de6de3 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:30.07630262 +0000 UTC m=+1337.244942221 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fmkmwl" (UID: "51d89b29-7872-4e9d-9fdd-b1fdd7de6de3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 16:08:26 crc kubenswrapper[4693]: I1212 16:08:26.282737 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:26 crc kubenswrapper[4693]: E1212 16:08:26.282868 4693 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 12 16:08:26 crc kubenswrapper[4693]: E1212 16:08:26.282916 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs podName:ffd9365b-fb4c-4a2b-a168-fcf9cce89228 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:30.282900872 +0000 UTC m=+1337.451540473 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs") pod "openstack-operator-controller-manager-565558f958-fnjh4" (UID: "ffd9365b-fb4c-4a2b-a168-fcf9cce89228") : secret "webhook-server-cert" not found Dec 12 16:08:26 crc kubenswrapper[4693]: I1212 16:08:26.283667 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:26 crc kubenswrapper[4693]: E1212 16:08:26.283753 4693 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 12 16:08:26 crc kubenswrapper[4693]: E1212 16:08:26.283779 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs podName:ffd9365b-fb4c-4a2b-a168-fcf9cce89228 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:30.283771645 +0000 UTC m=+1337.452411246 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs") pod "openstack-operator-controller-manager-565558f958-fnjh4" (UID: "ffd9365b-fb4c-4a2b-a168-fcf9cce89228") : secret "metrics-server-cert" not found Dec 12 16:08:26 crc kubenswrapper[4693]: E1212 16:08:26.672162 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc" podUID="fc2504e1-7808-49ef-9df0-2fda81f786f6" Dec 12 16:08:29 crc kubenswrapper[4693]: I1212 16:08:29.529494 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert\") pod \"infra-operator-controller-manager-78d48bff9d-vbmgp\" (UID: \"57409d4d-edf7-400c-9fcf-d6116ac22968\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" Dec 12 16:08:29 crc kubenswrapper[4693]: E1212 16:08:29.529728 4693 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 12 16:08:29 crc kubenswrapper[4693]: E1212 16:08:29.529956 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert podName:57409d4d-edf7-400c-9fcf-d6116ac22968 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:37.529938584 +0000 UTC m=+1344.698578185 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert") pod "infra-operator-controller-manager-78d48bff9d-vbmgp" (UID: "57409d4d-edf7-400c-9fcf-d6116ac22968") : secret "infra-operator-webhook-server-cert" not found Dec 12 16:08:30 crc kubenswrapper[4693]: I1212 16:08:30.112001 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fmkmwl\" (UID: \"51d89b29-7872-4e9d-9fdd-b1fdd7de6de3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 16:08:30 crc kubenswrapper[4693]: E1212 16:08:30.112210 4693 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 16:08:30 crc kubenswrapper[4693]: E1212 16:08:30.112354 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert podName:51d89b29-7872-4e9d-9fdd-b1fdd7de6de3 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:38.112334024 +0000 UTC m=+1345.280973625 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fmkmwl" (UID: "51d89b29-7872-4e9d-9fdd-b1fdd7de6de3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 16:08:30 crc kubenswrapper[4693]: I1212 16:08:30.317581 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:30 crc kubenswrapper[4693]: I1212 16:08:30.317705 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:30 crc kubenswrapper[4693]: E1212 16:08:30.317860 4693 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 12 16:08:30 crc kubenswrapper[4693]: E1212 16:08:30.317850 4693 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 12 16:08:30 crc kubenswrapper[4693]: E1212 16:08:30.317924 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs podName:ffd9365b-fb4c-4a2b-a168-fcf9cce89228 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:38.317908988 +0000 UTC m=+1345.486548579 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs") pod "openstack-operator-controller-manager-565558f958-fnjh4" (UID: "ffd9365b-fb4c-4a2b-a168-fcf9cce89228") : secret "webhook-server-cert" not found Dec 12 16:08:30 crc kubenswrapper[4693]: E1212 16:08:30.317995 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs podName:ffd9365b-fb4c-4a2b-a168-fcf9cce89228 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:38.317965279 +0000 UTC m=+1345.486604870 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs") pod "openstack-operator-controller-manager-565558f958-fnjh4" (UID: "ffd9365b-fb4c-4a2b-a168-fcf9cce89228") : secret "metrics-server-cert" not found Dec 12 16:08:37 crc kubenswrapper[4693]: I1212 16:08:37.568047 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert\") pod \"infra-operator-controller-manager-78d48bff9d-vbmgp\" (UID: \"57409d4d-edf7-400c-9fcf-d6116ac22968\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" Dec 12 16:08:37 crc kubenswrapper[4693]: I1212 16:08:37.573493 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57409d4d-edf7-400c-9fcf-d6116ac22968-cert\") pod \"infra-operator-controller-manager-78d48bff9d-vbmgp\" (UID: \"57409d4d-edf7-400c-9fcf-d6116ac22968\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" Dec 12 16:08:37 crc kubenswrapper[4693]: I1212 16:08:37.701238 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" Dec 12 16:08:38 crc kubenswrapper[4693]: I1212 16:08:38.180085 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fmkmwl\" (UID: \"51d89b29-7872-4e9d-9fdd-b1fdd7de6de3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 16:08:38 crc kubenswrapper[4693]: E1212 16:08:38.180302 4693 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 16:08:38 crc kubenswrapper[4693]: E1212 16:08:38.180766 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert podName:51d89b29-7872-4e9d-9fdd-b1fdd7de6de3 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:54.180738613 +0000 UTC m=+1361.349378214 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fmkmwl" (UID: "51d89b29-7872-4e9d-9fdd-b1fdd7de6de3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 16:08:38 crc kubenswrapper[4693]: I1212 16:08:38.384058 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:38 crc kubenswrapper[4693]: E1212 16:08:38.384388 4693 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 12 16:08:38 crc kubenswrapper[4693]: E1212 16:08:38.384543 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs podName:ffd9365b-fb4c-4a2b-a168-fcf9cce89228 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:54.384505798 +0000 UTC m=+1361.553145399 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs") pod "openstack-operator-controller-manager-565558f958-fnjh4" (UID: "ffd9365b-fb4c-4a2b-a168-fcf9cce89228") : secret "webhook-server-cert" not found Dec 12 16:08:38 crc kubenswrapper[4693]: I1212 16:08:38.385886 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:38 crc kubenswrapper[4693]: E1212 16:08:38.386011 4693 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 12 16:08:38 crc kubenswrapper[4693]: E1212 16:08:38.386081 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs podName:ffd9365b-fb4c-4a2b-a168-fcf9cce89228 nodeName:}" failed. No retries permitted until 2025-12-12 16:08:54.38606036 +0000 UTC m=+1361.554699961 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs") pod "openstack-operator-controller-manager-565558f958-fnjh4" (UID: "ffd9365b-fb4c-4a2b-a168-fcf9cce89228") : secret "metrics-server-cert" not found Dec 12 16:08:48 crc kubenswrapper[4693]: E1212 16:08:48.741733 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87" Dec 12 16:08:48 crc kubenswrapper[4693]: E1212 16:08:48.742436 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9hqjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-967d97867-cdd8s_openstack-operators(8c0d2adb-6fec-4574-8733-b6e817a943e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:08:50 crc kubenswrapper[4693]: E1212 16:08:50.136255 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Dec 12 16:08:50 crc 
kubenswrapper[4693]: E1212 16:08:50.136753 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lwfbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-lz95h_openstack-operators(267498f5-fa7b-44ec-bd94-361a261e8844): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:08:52 crc kubenswrapper[4693]: E1212 16:08:52.003110 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a" Dec 12 16:08:52 crc kubenswrapper[4693]: E1212 16:08:52.003741 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xl4j4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-697fb699cf-5w2f8_openstack-operators(0f300296-9b08-4fcc-9933-a752304b3188): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:08:52 crc kubenswrapper[4693]: E1212 16:08:52.767284 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" Dec 12 16:08:52 crc kubenswrapper[4693]: E1212 16:08:52.767497 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m78r4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-tdrqj_openstack-operators(4fa52597-7870-4902-a274-6a4103c3630b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:08:54 crc kubenswrapper[4693]: I1212 16:08:54.271859 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fmkmwl\" (UID: \"51d89b29-7872-4e9d-9fdd-b1fdd7de6de3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 16:08:54 crc kubenswrapper[4693]: I1212 16:08:54.277354 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51d89b29-7872-4e9d-9fdd-b1fdd7de6de3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fmkmwl\" (UID: \"51d89b29-7872-4e9d-9fdd-b1fdd7de6de3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 16:08:54 crc kubenswrapper[4693]: I1212 16:08:54.474549 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 16:08:54 crc kubenswrapper[4693]: I1212 16:08:54.475573 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:54 crc kubenswrapper[4693]: I1212 16:08:54.475929 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:54 crc kubenswrapper[4693]: I1212 16:08:54.481174 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-metrics-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:54 crc kubenswrapper[4693]: I1212 16:08:54.481313 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ffd9365b-fb4c-4a2b-a168-fcf9cce89228-webhook-certs\") pod \"openstack-operator-controller-manager-565558f958-fnjh4\" (UID: \"ffd9365b-fb4c-4a2b-a168-fcf9cce89228\") " pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:54 crc kubenswrapper[4693]: I1212 16:08:54.748838 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:08:55 crc kubenswrapper[4693]: E1212 16:08:55.544630 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 12 16:08:55 crc kubenswrapper[4693]: E1212 16:08:55.544938 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4575,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-9jcss_openstack-operators(a41df83d-6bb2-4c49-a431-f5851036a44d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:08:56 crc kubenswrapper[4693]: E1212 16:08:56.238663 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 12 16:08:56 crc kubenswrapper[4693]: E1212 16:08:56.239394 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rxv7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-nrfbt_openstack-operators(fe8f2a92-e87a-40d4-b96b-0e0af6443656): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:09:00 crc kubenswrapper[4693]: E1212 16:09:00.783528 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 12 16:09:00 crc kubenswrapper[4693]: E1212 16:09:00.784246 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7hsfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-v852d_openstack-operators(96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:09:03 crc kubenswrapper[4693]: E1212 16:09:03.578890 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991" Dec 12 16:09:03 crc kubenswrapper[4693]: E1212 16:09:03.579897 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w49kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-wsr9h_openstack-operators(847f97b7-84be-4d2a-a699-30ca49fd1023): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:09:05 crc kubenswrapper[4693]: E1212 16:09:05.576689 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Dec 12 16:09:05 crc kubenswrapper[4693]: E1212 16:09:05.577975 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6f8xk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-b86kc_openstack-operators(5616685d-71d7-49b9-8c1b-6eccc11a74a1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:09:06 crc kubenswrapper[4693]: E1212 16:09:06.308475 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a" Dec 12 16:09:06 crc kubenswrapper[4693]: E1212 16:09:06.308815 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l52xm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-clqpc_openstack-operators(fc2504e1-7808-49ef-9df0-2fda81f786f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:09:07 crc kubenswrapper[4693]: E1212 16:09:07.447525 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 12 16:09:07 crc kubenswrapper[4693]: E1212 16:09:07.447692 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pcbtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-c9ngm_openstack-operators(e1146df6-70f0-4074-9f67-668bdb272e1a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Dec 12 16:09:07 crc kubenswrapper[4693]: E1212 16:09:07.449888 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9ngm" podUID="e1146df6-70f0-4074-9f67-668bdb272e1a" Dec 12 16:09:08 crc kubenswrapper[4693]: E1212 16:09:08.124525 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9ngm" podUID="e1146df6-70f0-4074-9f67-668bdb272e1a" Dec 12 16:09:11 crc kubenswrapper[4693]: E1212 16:09:11.640398 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 12 16:09:11 crc kubenswrapper[4693]: E1212 16:09:11.640963 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b9mxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-bkfnh_openstack-operators(4c46ca75-8071-4f2a-bda0-44bf851365cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:09:12 crc kubenswrapper[4693]: E1212 16:09:12.194969 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 12 16:09:12 crc kubenswrapper[4693]: E1212 16:09:12.195482 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s684k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-w5zgk_openstack-operators(fc9969a7-d068-499a-90a4-571822a60c5b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:09:12 crc kubenswrapper[4693]: I1212 16:09:12.530132 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:09:12 crc kubenswrapper[4693]: I1212 16:09:12.530193 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:09:12 crc kubenswrapper[4693]: E1212 16:09:12.754751 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.188:5001/openstack-k8s-operators/telemetry-operator:e56ff59cddfcaaf1ce66e7783ae9c0344c631735" Dec 12 16:09:12 crc kubenswrapper[4693]: E1212 16:09:12.754810 4693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.188:5001/openstack-k8s-operators/telemetry-operator:e56ff59cddfcaaf1ce66e7783ae9c0344c631735" Dec 12 16:09:12 crc kubenswrapper[4693]: E1212 16:09:12.754963 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.188:5001/openstack-k8s-operators/telemetry-operator:e56ff59cddfcaaf1ce66e7783ae9c0344c631735,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c97jr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6676c589bf-7kphf_openstack-operators(26537316-7b55-48dc-b952-bc2220120194): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:09:13 crc kubenswrapper[4693]: I1212 16:09:13.424388 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp"] Dec 12 16:09:13 crc kubenswrapper[4693]: I1212 16:09:13.492966 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4"] Dec 12 16:09:13 crc kubenswrapper[4693]: I1212 16:09:13.510250 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl"] Dec 12 16:09:14 crc kubenswrapper[4693]: W1212 16:09:14.108367 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57409d4d_edf7_400c_9fcf_d6116ac22968.slice/crio-53e2979f78fdb4bcd2c6c510c8fb9e1fd2a7688cdb42f421645fb94381a712be WatchSource:0}: Error finding container 53e2979f78fdb4bcd2c6c510c8fb9e1fd2a7688cdb42f421645fb94381a712be: Status 404 returned error can't find the container with id 53e2979f78fdb4bcd2c6c510c8fb9e1fd2a7688cdb42f421645fb94381a712be Dec 12 16:09:14 crc kubenswrapper[4693]: W1212 16:09:14.113995 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffd9365b_fb4c_4a2b_a168_fcf9cce89228.slice/crio-f99f761c0787674657e7f5b46f7f6065f961e941617d1e382c948f48dee5120e WatchSource:0}: Error finding container f99f761c0787674657e7f5b46f7f6065f961e941617d1e382c948f48dee5120e: Status 404 returned error can't find the container with id f99f761c0787674657e7f5b46f7f6065f961e941617d1e382c948f48dee5120e Dec 12 16:09:14 crc kubenswrapper[4693]: I1212 16:09:14.182958 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" event={"ID":"57409d4d-edf7-400c-9fcf-d6116ac22968","Type":"ContainerStarted","Data":"53e2979f78fdb4bcd2c6c510c8fb9e1fd2a7688cdb42f421645fb94381a712be"} Dec 12 16:09:14 crc kubenswrapper[4693]: I1212 16:09:14.184913 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" event={"ID":"51d89b29-7872-4e9d-9fdd-b1fdd7de6de3","Type":"ContainerStarted","Data":"dd59ff603a980fbe3811f3849166fe1d785e788e2295527d5176b29693e21382"} Dec 12 16:09:14 crc kubenswrapper[4693]: I1212 16:09:14.186693 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" event={"ID":"ffd9365b-fb4c-4a2b-a168-fcf9cce89228","Type":"ContainerStarted","Data":"f99f761c0787674657e7f5b46f7f6065f961e941617d1e382c948f48dee5120e"} Dec 12 16:09:15 crc kubenswrapper[4693]: I1212 16:09:15.209083 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg" event={"ID":"f73b5773-7bac-41ae-af91-0e504b5a234f","Type":"ContainerStarted","Data":"e47db79c5087139ec93610247bbc9a3fc88933346d6ef74e7388d8dde083e67f"} Dec 12 16:09:15 crc kubenswrapper[4693]: I1212 16:09:15.219769 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb" event={"ID":"463ba770-ab51-4445-8f63-bd5615ddb865","Type":"ContainerStarted","Data":"e1846ff0699fbb64337135f294c6511cb04695d6e10c6d4127c59685f5d36cef"} Dec 12 16:09:15 crc kubenswrapper[4693]: I1212 16:09:15.225871 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cg948" event={"ID":"f8bfea4b-063c-461e-9116-63d76fd06130","Type":"ContainerStarted","Data":"c22c5653dc046da9e9b49b7f8f590fd134ce41e1dd9d70d00df8a0a5d12b702e"} Dec 12 16:09:15 crc kubenswrapper[4693]: I1212 16:09:15.236183 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" event={"ID":"ac211615-b518-4011-be82-483cbb246d4b","Type":"ContainerStarted","Data":"5e23ca81cf63a77e3f12bf7b5606e98412a564f24d35edd158d97182c2089441"} Dec 12 16:09:15 crc kubenswrapper[4693]: I1212 16:09:15.240329 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw" event={"ID":"f56863f1-3f85-4c6f-a2a6-81f0ee3b6317","Type":"ContainerStarted","Data":"2f94af0f17cefe50ab0aba345a03a2b51e7e81ba41c7151b1cc4d151e686966d"} Dec 12 16:09:15 crc kubenswrapper[4693]: I1212 16:09:15.249875 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-cb6hq" event={"ID":"b3dfd27f-9569-444d-a917-04c7f4c67ec9","Type":"ContainerStarted","Data":"56b0dd069779fe7ef004ed985116ed72c76b285bb8ff9bf25a87923955276e45"} Dec 12 16:09:17 crc kubenswrapper[4693]: I1212 16:09:17.267888 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" event={"ID":"ffd9365b-fb4c-4a2b-a168-fcf9cce89228","Type":"ContainerStarted","Data":"70df92f7b50c231ddfad12c27db0d0888ce8f713ad2eb3e35e7f47e9c136fc38"} Dec 12 16:09:17 crc kubenswrapper[4693]: I1212 16:09:17.268785 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 16:09:17 crc kubenswrapper[4693]: I1212 16:09:17.326297 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" podStartSLOduration=56.326263982 podStartE2EDuration="56.326263982s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:09:17.313486428 +0000 UTC m=+1384.482126059" watchObservedRunningTime="2025-12-12 16:09:17.326263982 +0000 UTC m=+1384.494903583" Dec 12 16:09:22 crc kubenswrapper[4693]: E1212 16:09:22.015535 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc" podUID="fc2504e1-7808-49ef-9df0-2fda81f786f6" Dec 12 16:09:22 crc kubenswrapper[4693]: E1212 16:09:22.021286 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" podUID="fe8f2a92-e87a-40d4-b96b-0e0af6443656" Dec 12 16:09:22 crc kubenswrapper[4693]: E1212 16:09:22.021613 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h" podUID="847f97b7-84be-4d2a-a699-30ca49fd1023" Dec 12 16:09:22 crc kubenswrapper[4693]: E1212 16:09:22.022145 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h" podUID="267498f5-fa7b-44ec-bd94-361a261e8844" Dec 12 16:09:22 crc kubenswrapper[4693]: E1212 16:09:22.022260 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-5w2f8" podUID="0f300296-9b08-4fcc-9933-a752304b3188" Dec 12 16:09:22 crc kubenswrapper[4693]: E1212 16:09:22.022725 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" podUID="8c0d2adb-6fec-4574-8733-b6e817a943e5" Dec 12 16:09:22 crc kubenswrapper[4693]: E1212 16:09:22.023209 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" podUID="a41df83d-6bb2-4c49-a431-f5851036a44d" Dec 12 16:09:22 crc kubenswrapper[4693]: I1212 16:09:22.334631 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h" event={"ID":"267498f5-fa7b-44ec-bd94-361a261e8844","Type":"ContainerStarted","Data":"473fe7eeac26952485a82ee9cd6ee4480ef23493895e5fe5c4f18d5615f51198"} Dec 12 16:09:22 crc kubenswrapper[4693]: E1212 16:09:22.335964 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w5zgk" podUID="fc9969a7-d068-499a-90a4-571822a60c5b" Dec 12 16:09:22 crc kubenswrapper[4693]: I1212 16:09:22.345167 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc" event={"ID":"fc2504e1-7808-49ef-9df0-2fda81f786f6","Type":"ContainerStarted","Data":"d89f15c8d2917c719553e397698ea761db6b7d50d696a80ad5ca2a877f4f8ee9"} Dec 12 16:09:22 crc kubenswrapper[4693]: I1212 16:09:22.354067 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h" event={"ID":"847f97b7-84be-4d2a-a699-30ca49fd1023","Type":"ContainerStarted","Data":"33a5c78b1341a05b0861afc99c4809abaa7839aeb54406e320fcef15cc2def0c"} Dec 12 16:09:22 crc kubenswrapper[4693]: I1212 16:09:22.358919 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" event={"ID":"8c0d2adb-6fec-4574-8733-b6e817a943e5","Type":"ContainerStarted","Data":"0d06020bc06ed7b9c4a76d5f5cf0540ad06099fe019ae5c39b161ef277aadc5d"} Dec 12 16:09:22 crc kubenswrapper[4693]: I1212 16:09:22.362425 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-5w2f8" event={"ID":"0f300296-9b08-4fcc-9933-a752304b3188","Type":"ContainerStarted","Data":"d99728bf37a6cfae7f3ffb44799df8333546ca85316ebb6321cd7f4c039bdade"} Dec 12 16:09:22 crc kubenswrapper[4693]: I1212 16:09:22.366304 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" event={"ID":"a41df83d-6bb2-4c49-a431-f5851036a44d","Type":"ContainerStarted","Data":"c98e9cfe2ee5121957fc7fa6773c679fd80c471aa331228f009b46b374a249a6"} Dec 12 16:09:22 crc kubenswrapper[4693]: I1212 16:09:22.368958 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" event={"ID":"57409d4d-edf7-400c-9fcf-d6116ac22968","Type":"ContainerStarted","Data":"9ef060955417af15735d0763dfa8713e44decfb86a6cb41c446089eb734faada"} Dec 12 16:09:22 crc kubenswrapper[4693]: I1212 16:09:22.369014 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" event={"ID":"57409d4d-edf7-400c-9fcf-d6116ac22968","Type":"ContainerStarted","Data":"e2f00db4e1f592e32c09e6d1ddfb5f848467945c8e853e8cc87ea70f513f722a"} Dec 12 16:09:22 crc kubenswrapper[4693]: I1212 16:09:22.369506 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" Dec 12 16:09:22 crc kubenswrapper[4693]: I1212 16:09:22.378609 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" 
event={"ID":"fe8f2a92-e87a-40d4-b96b-0e0af6443656","Type":"ContainerStarted","Data":"7d7136681a21e6009ded947be98284c016ea40fb3a3776658347ec2fb0544e7a"} Dec 12 16:09:22 crc kubenswrapper[4693]: I1212 16:09:22.383036 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w5zgk" event={"ID":"fc9969a7-d068-499a-90a4-571822a60c5b","Type":"ContainerStarted","Data":"389c3c5a8c6d14f1cbc08256bdb83f9f5cd958a7f2befdbaf3fab1998a524d42"} Dec 12 16:09:22 crc kubenswrapper[4693]: I1212 16:09:22.399783 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb" event={"ID":"463ba770-ab51-4445-8f63-bd5615ddb865","Type":"ContainerStarted","Data":"b3e76772a1f2392dcc57a191a4e22fcf9e1439628332be5b866c4a4890cacd6e"} Dec 12 16:09:22 crc kubenswrapper[4693]: I1212 16:09:22.401077 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb" Dec 12 16:09:22 crc kubenswrapper[4693]: I1212 16:09:22.408030 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb" Dec 12 16:09:22 crc kubenswrapper[4693]: E1212 16:09:22.432133 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc" podUID="fc2504e1-7808-49ef-9df0-2fda81f786f6" Dec 12 16:09:22 crc kubenswrapper[4693]: I1212 16:09:22.546309 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb" podStartSLOduration=4.846335892 podStartE2EDuration="1m1.546285334s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:24.522682112 +0000 UTC m=+1331.691321713" lastFinishedPulling="2025-12-12 16:09:21.222631554 +0000 UTC m=+1388.391271155" observedRunningTime="2025-12-12 16:09:22.543646133 +0000 UTC m=+1389.712285744" watchObservedRunningTime="2025-12-12 16:09:22.546285334 +0000 UTC m=+1389.714924925" Dec 12 16:09:22 crc kubenswrapper[4693]: I1212 16:09:22.608812 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" podStartSLOduration=54.62852244 podStartE2EDuration="1m1.608785695s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:09:14.111631534 +0000 UTC m=+1381.280271135" lastFinishedPulling="2025-12-12 16:09:21.091894769 +0000 UTC m=+1388.260534390" observedRunningTime="2025-12-12 16:09:22.590545604 +0000 UTC m=+1389.759185215" watchObservedRunningTime="2025-12-12 16:09:22.608785695 +0000 UTC m=+1389.777425306" Dec 12 16:09:22 crc kubenswrapper[4693]: E1212 16:09:22.787642 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh" podUID="4c46ca75-8071-4f2a-bda0-44bf851365cb" Dec 12 16:09:23 crc kubenswrapper[4693]: E1212 16:09:23.167805 4693 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" podUID="26537316-7b55-48dc-b952-bc2220120194" Dec 12 16:09:23 crc kubenswrapper[4693]: E1212 16:09:23.254361 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj" podUID="4fa52597-7870-4902-a274-6a4103c3630b" Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.415924 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh" event={"ID":"4c46ca75-8071-4f2a-bda0-44bf851365cb","Type":"ContainerStarted","Data":"3e28589c3e79a3e81d9f626c5bd1a26e7df38c27950da3bd8eab3d80919adc55"} Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.430911 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj" event={"ID":"4fa52597-7870-4902-a274-6a4103c3630b","Type":"ContainerStarted","Data":"fe6162ddce168ec5b9207121d88506b4ca9051d2b4505fbe89bfbe4059e2c41b"} Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.447537 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" event={"ID":"51d89b29-7872-4e9d-9fdd-b1fdd7de6de3","Type":"ContainerStarted","Data":"8d9667da20cfa41d0668bfc7bfae77ac66975b880450dd568f740c721c603ee3"} Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.447580 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" event={"ID":"51d89b29-7872-4e9d-9fdd-b1fdd7de6de3","Type":"ContainerStarted","Data":"2e6f3ae0c36a0ae0ceda423f7405ce23de5ba04d7bb285e211d26f4189b6fd30"} Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.448308 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.459111 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" event={"ID":"ac211615-b518-4011-be82-483cbb246d4b","Type":"ContainerStarted","Data":"bc35b766b1fdd7912c45a658862c7336d470910b9df3500a365ccb689cb236e9"} Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.459561 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.463085 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" event={"ID":"26537316-7b55-48dc-b952-bc2220120194","Type":"ContainerStarted","Data":"a034f3525c49fae92aea824764a8e0d81ca7f4aeca07ebe1083fd42525b4fe0c"} Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.463959 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.467578 4693 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9ngm" event={"ID":"e1146df6-70f0-4074-9f67-668bdb272e1a","Type":"ContainerStarted","Data":"ef17aa2cd783ae99271b7e1e9e3d5e4773ea41e0e6e074f6fb1a70984873d8d3"} Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.471707 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg" event={"ID":"f73b5773-7bac-41ae-af91-0e504b5a234f","Type":"ContainerStarted","Data":"5e4feb4d7e373e80e869b7a4e38a990742a1e8e311bb722f2dbcd07593da350a"} Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.472037 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg" Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.479539 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg" Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.480580 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cg948" event={"ID":"f8bfea4b-063c-461e-9116-63d76fd06130","Type":"ContainerStarted","Data":"ddfcf720da594ffe4517a577a51d9560d22d76c03d9e0f3d04d679e190792494"} Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.481551 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cg948" Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.484050 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cg948" Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.484746 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-cb6hq" event={"ID":"b3dfd27f-9569-444d-a917-04c7f4c67ec9","Type":"ContainerStarted","Data":"74add399e5da7bd63abe47250f65f4e95628cedd7a39f71b10ba0f7d01aefa3f"} Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.524821 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" podStartSLOduration=54.208413451 podStartE2EDuration="1m2.524795491s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:09:14.144640471 +0000 UTC m=+1381.313280072" lastFinishedPulling="2025-12-12 16:09:22.461022521 +0000 UTC m=+1389.629662112" observedRunningTime="2025-12-12 16:09:23.508913824 +0000 UTC m=+1390.677553425" watchObservedRunningTime="2025-12-12 16:09:23.524795491 +0000 UTC m=+1390.693435102" Dec 12 16:09:23 crc kubenswrapper[4693]: E1212 16:09:23.571805 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.188:5001/openstack-k8s-operators/telemetry-operator:e56ff59cddfcaaf1ce66e7783ae9c0344c631735\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" podUID="26537316-7b55-48dc-b952-bc2220120194" Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.672722 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" podStartSLOduration=2.5008687370000002 
podStartE2EDuration="1m2.672694879s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:22.588685164 +0000 UTC m=+1329.757324765" lastFinishedPulling="2025-12-12 16:09:22.760511306 +0000 UTC m=+1389.929150907" observedRunningTime="2025-12-12 16:09:23.595137433 +0000 UTC m=+1390.763777054" watchObservedRunningTime="2025-12-12 16:09:23.672694879 +0000 UTC m=+1390.841334490" Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.676728 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9ngm" podStartSLOduration=4.741652397 podStartE2EDuration="1m2.676707097s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:24.526203187 +0000 UTC m=+1331.694842788" lastFinishedPulling="2025-12-12 16:09:22.461257887 +0000 UTC m=+1389.629897488" observedRunningTime="2025-12-12 16:09:23.624560085 +0000 UTC m=+1390.793199686" watchObservedRunningTime="2025-12-12 16:09:23.676707097 +0000 UTC m=+1390.845346698" Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.696834 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-cb6hq" podStartSLOduration=4.178634017 podStartE2EDuration="1m2.696809738s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:24.010312624 +0000 UTC m=+1331.178952225" lastFinishedPulling="2025-12-12 16:09:22.528488345 +0000 UTC m=+1389.697127946" observedRunningTime="2025-12-12 16:09:23.684371273 +0000 UTC m=+1390.853010894" watchObservedRunningTime="2025-12-12 16:09:23.696809738 +0000 UTC m=+1390.865449349" Dec 12 16:09:23 crc kubenswrapper[4693]: I1212 16:09:23.733808 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cg948" podStartSLOduration=4.121798579 podStartE2EDuration="1m2.733790582s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:23.916543844 +0000 UTC m=+1331.085183445" lastFinishedPulling="2025-12-12 16:09:22.528535847 +0000 UTC m=+1389.697175448" observedRunningTime="2025-12-12 16:09:23.723918007 +0000 UTC m=+1390.892557608" watchObservedRunningTime="2025-12-12 16:09:23.733790582 +0000 UTC m=+1390.902430183" Dec 12 16:09:24 crc kubenswrapper[4693]: E1212 16:09:24.022700 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" podUID="5616685d-71d7-49b9-8c1b-6eccc11a74a1" Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.515651 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" event={"ID":"8c0d2adb-6fec-4574-8733-b6e817a943e5","Type":"ContainerStarted","Data":"81d658fedadc442c02a9a59bf8446a19442f00ede51a99032702807c85ff4d65"} Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.517090 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.535450 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" 
event={"ID":"fe8f2a92-e87a-40d4-b96b-0e0af6443656","Type":"ContainerStarted","Data":"296f286281a7411ee64fcf0c6d8f9c94e74bff3a98bb05dece96c22a0b72ef14"} Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.536355 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.550944 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg" podStartSLOduration=4.426033285 podStartE2EDuration="1m3.550923649s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:23.824331247 +0000 UTC m=+1330.992970848" lastFinishedPulling="2025-12-12 16:09:22.949221611 +0000 UTC m=+1390.117861212" observedRunningTime="2025-12-12 16:09:23.773566602 +0000 UTC m=+1390.942206203" watchObservedRunningTime="2025-12-12 16:09:24.550923649 +0000 UTC m=+1391.719563250" Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.560819 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w5zgk" event={"ID":"fc9969a7-d068-499a-90a4-571822a60c5b","Type":"ContainerStarted","Data":"cc468b15567a5f30ae49844f240bf8fb5335c84ae92d0684ae50d3fb0fe6c72c"} Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.561737 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w5zgk" Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.563426 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" event={"ID":"a41df83d-6bb2-4c49-a431-f5851036a44d","Type":"ContainerStarted","Data":"407944a20bacde363ae3e0b0970210522ae53361922a1667ebc7424a8abcc44c"} Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.563875 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.566023 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" event={"ID":"5616685d-71d7-49b9-8c1b-6eccc11a74a1","Type":"ContainerStarted","Data":"4893d1822dd68c74b63fa27cf7d6987278ab47a3e248f90eb319be5bae78080d"} Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.568623 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-cb6hq" Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.572653 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-cb6hq" Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.580147 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" podStartSLOduration=4.135196569 podStartE2EDuration="1m3.580119385s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:23.979567898 +0000 UTC m=+1331.148207499" lastFinishedPulling="2025-12-12 16:09:23.424490714 +0000 UTC m=+1390.593130315" observedRunningTime="2025-12-12 16:09:24.55278511 +0000 UTC m=+1391.721424731" watchObservedRunningTime="2025-12-12 16:09:24.580119385 +0000 UTC 
m=+1391.748758996"
Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.582567 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" podStartSLOduration=4.10545774 podStartE2EDuration="1m3.58254986s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:23.948823442 +0000 UTC m=+1331.117463043" lastFinishedPulling="2025-12-12 16:09:23.425915562 +0000 UTC m=+1390.594555163" observedRunningTime="2025-12-12 16:09:24.574430082 +0000 UTC m=+1391.743069693" watchObservedRunningTime="2025-12-12 16:09:24.58254986 +0000 UTC m=+1391.751189471"
Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.631834 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" podStartSLOduration=3.703881328 podStartE2EDuration="1m3.631819555s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:23.999784581 +0000 UTC m=+1331.168424182" lastFinishedPulling="2025-12-12 16:09:23.927722808 +0000 UTC m=+1391.096362409" observedRunningTime="2025-12-12 16:09:24.629950825 +0000 UTC m=+1391.798590426" watchObservedRunningTime="2025-12-12 16:09:24.631819555 +0000 UTC m=+1391.800459156"
Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.635018 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w5zgk" podStartSLOduration=4.220751399 podStartE2EDuration="1m3.635005531s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:24.013969552 +0000 UTC m=+1331.182609153" lastFinishedPulling="2025-12-12 16:09:23.428223674 +0000 UTC m=+1390.596863285" observedRunningTime="2025-12-12 16:09:24.601641634 +0000 UTC m=+1391.770281255" watchObservedRunningTime="2025-12-12 16:09:24.635005531 +0000 UTC m=+1391.803645132"
Dec 12 16:09:24 crc kubenswrapper[4693]: I1212 16:09:24.780007 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4"
Dec 12 16:09:25 crc kubenswrapper[4693]: E1212 16:09:25.122344 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" podUID="96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c"
Dec 12 16:09:25 crc kubenswrapper[4693]: I1212 16:09:25.578642 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h" event={"ID":"847f97b7-84be-4d2a-a699-30ca49fd1023","Type":"ContainerStarted","Data":"3d248498d77973f3f842650d53365bea90630c463e6897630c087dee1ee9653a"}
Dec 12 16:09:25 crc kubenswrapper[4693]: I1212 16:09:25.578708 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h"
Dec 12 16:09:25 crc kubenswrapper[4693]: I1212 16:09:25.580571 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj" event={"ID":"4fa52597-7870-4902-a274-6a4103c3630b","Type":"ContainerStarted","Data":"8d7c3212841d10c3b1ba24b2b0a2f729a5f8790803fc901802945f5292c9caeb"}
Dec 12 16:09:25 crc kubenswrapper[4693]: I1212 16:09:25.582556 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw" event={"ID":"f56863f1-3f85-4c6f-a2a6-81f0ee3b6317","Type":"ContainerStarted","Data":"ae551786309c702cc2ebb3dd7dc4b99c47caa3bfe0f97644da712d6b817970ce"}
Dec 12 16:09:25 crc kubenswrapper[4693]: I1212 16:09:25.582865 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw"
Dec 12 16:09:25 crc kubenswrapper[4693]: I1212 16:09:25.587713 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-5w2f8" event={"ID":"0f300296-9b08-4fcc-9933-a752304b3188","Type":"ContainerStarted","Data":"950020709c74150fd3e25fd5260adb7c0e622256d5a79a83db040a7d3cc701e6"}
Dec 12 16:09:25 crc kubenswrapper[4693]: I1212 16:09:25.587879 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-5w2f8"
Dec 12 16:09:25 crc kubenswrapper[4693]: I1212 16:09:25.594050 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h" event={"ID":"267498f5-fa7b-44ec-bd94-361a261e8844","Type":"ContainerStarted","Data":"44252bd97888351fd64ef26df7ecdfe8fb5ba9a479f60f16755c7642c57239cc"}
Dec 12 16:09:25 crc kubenswrapper[4693]: I1212 16:09:25.594189 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h"
Dec 12 16:09:25 crc kubenswrapper[4693]: I1212 16:09:25.596628 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" event={"ID":"96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c","Type":"ContainerStarted","Data":"df9dd602c547a1064aedeac2391f2b4c2560c9c3043fb9b06e72f518efda0cde"}
Dec 12 16:09:25 crc kubenswrapper[4693]: I1212 16:09:25.598775 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh" event={"ID":"4c46ca75-8071-4f2a-bda0-44bf851365cb","Type":"ContainerStarted","Data":"418423419bc593b2f3145da6c54d0dbf99b9beb64926b1c1309a7b666557eb6a"}
Dec 12 16:09:25 crc kubenswrapper[4693]: I1212 16:09:25.608989 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw"
Dec 12 16:09:25 crc kubenswrapper[4693]: I1212 16:09:25.657924 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw" podStartSLOduration=4.060645656 podStartE2EDuration="1m4.657898061s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:23.49134926 +0000 UTC m=+1330.659988871" lastFinishedPulling="2025-12-12 16:09:24.088601675 +0000 UTC m=+1391.257241276" observedRunningTime="2025-12-12 16:09:25.652696931 +0000 UTC m=+1392.821336532" watchObservedRunningTime="2025-12-12 16:09:25.657898061 +0000 UTC m=+1392.826537672"
Dec 12 16:09:25 crc kubenswrapper[4693]: I1212 16:09:25.661758 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h" podStartSLOduration=5.158962369 podStartE2EDuration="1m4.661741394s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:24.515730945 +0000 UTC m=+1331.684370546" lastFinishedPulling="2025-12-12 16:09:24.01850997 +0000 UTC m=+1391.187149571" observedRunningTime="2025-12-12 16:09:25.618690496 +0000 UTC m=+1392.787330097" watchObservedRunningTime="2025-12-12 16:09:25.661741394 +0000 UTC m=+1392.830380995"
Dec 12 16:09:25 crc kubenswrapper[4693]: I1212 16:09:25.702732 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h" podStartSLOduration=4.250615425 podStartE2EDuration="1m4.702713096s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:23.486370756 +0000 UTC m=+1330.655010357" lastFinishedPulling="2025-12-12 16:09:23.938468427 +0000 UTC m=+1391.107108028" observedRunningTime="2025-12-12 16:09:25.689393048 +0000 UTC m=+1392.858032649" watchObservedRunningTime="2025-12-12 16:09:25.702713096 +0000 UTC m=+1392.871352697"
Dec 12 16:09:25 crc kubenswrapper[4693]: I1212 16:09:25.778250 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-5w2f8" podStartSLOduration=2.896042369 podStartE2EDuration="1m4.778231917s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:22.857263511 +0000 UTC m=+1330.025903102" lastFinishedPulling="2025-12-12 16:09:24.739453049 +0000 UTC m=+1391.908092650" observedRunningTime="2025-12-12 16:09:25.771142567 +0000 UTC m=+1392.939782168" watchObservedRunningTime="2025-12-12 16:09:25.778231917 +0000 UTC m=+1392.946871518"
Dec 12 16:09:26 crc kubenswrapper[4693]: I1212 16:09:26.608482 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" event={"ID":"5616685d-71d7-49b9-8c1b-6eccc11a74a1","Type":"ContainerStarted","Data":"8806d148894154b8e08ea7ca80d695a5b54c3bb76348728810c0c0f3d36a84f2"}
Dec 12 16:09:26 crc kubenswrapper[4693]: I1212 16:09:26.608850 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc"
Dec 12 16:09:26 crc kubenswrapper[4693]: I1212 16:09:26.610788 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" event={"ID":"96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c","Type":"ContainerStarted","Data":"b3a8a7dfec531bbd1782fdace0f56de8b1459733c11e3220731632ffeb2c553f"}
Dec 12 16:09:26 crc kubenswrapper[4693]: I1212 16:09:26.658374 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" podStartSLOduration=4.712830061 podStartE2EDuration="1m5.658354919s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:24.553977153 +0000 UTC m=+1331.722616754" lastFinishedPulling="2025-12-12 16:09:25.499502011 +0000 UTC m=+1392.668141612" observedRunningTime="2025-12-12 16:09:26.644505636 +0000 UTC m=+1393.813145247" watchObservedRunningTime="2025-12-12 16:09:26.658354919 +0000 UTC m=+1393.826994530"
Dec 12 16:09:26 crc kubenswrapper[4693]: I1212 16:09:26.678858 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh" podStartSLOduration=4.91977054 podStartE2EDuration="1m5.678840409s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:23.919420341 +0000 UTC m=+1331.088059942" lastFinishedPulling="2025-12-12 16:09:24.67849021 +0000 UTC m=+1391.847129811" observedRunningTime="2025-12-12 16:09:26.677249467 +0000 UTC m=+1393.845889068" watchObservedRunningTime="2025-12-12 16:09:26.678840409 +0000 UTC m=+1393.847480000"
Dec 12 16:09:26 crc kubenswrapper[4693]: I1212 16:09:26.711790 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" podStartSLOduration=4.093440253 podStartE2EDuration="1m5.711770335s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:24.521253084 +0000 UTC m=+1331.689892685" lastFinishedPulling="2025-12-12 16:09:26.139583166 +0000 UTC m=+1393.308222767" observedRunningTime="2025-12-12 16:09:26.704118809 +0000 UTC m=+1393.872758410" watchObservedRunningTime="2025-12-12 16:09:26.711770335 +0000 UTC m=+1393.880409946"
Dec 12 16:09:26 crc kubenswrapper[4693]: I1212 16:09:26.751793 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj" podStartSLOduration=3.911503039 podStartE2EDuration="1m5.751774311s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:22.89854942 +0000 UTC m=+1330.067189021" lastFinishedPulling="2025-12-12 16:09:24.738820682 +0000 UTC m=+1391.907460293" observedRunningTime="2025-12-12 16:09:26.744911527 +0000 UTC m=+1393.913551128" watchObservedRunningTime="2025-12-12 16:09:26.751774311 +0000 UTC m=+1393.920413912"
Dec 12 16:09:27 crc kubenswrapper[4693]: I1212 16:09:27.619757 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d"
Dec 12 16:09:27 crc kubenswrapper[4693]: I1212 16:09:27.707746 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp"
Dec 12 16:09:31 crc kubenswrapper[4693]: I1212 16:09:31.533898 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj"
Dec 12 16:09:31 crc kubenswrapper[4693]: I1212 16:09:31.536434 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj"
Dec 12 16:09:31 crc kubenswrapper[4693]: I1212 16:09:31.725048 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-5w2f8"
Dec 12 16:09:31 crc kubenswrapper[4693]: I1212 16:09:31.778781 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h"
Dec 12 16:09:31 crc kubenswrapper[4693]: I1212 16:09:31.796596 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt"
Dec 12 16:09:32 crc kubenswrapper[4693]: I1212 16:09:32.147334 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s"
Dec 12 16:09:32 crc kubenswrapper[4693]: I1212 16:09:32.182881 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh"
Dec 12 16:09:32 crc kubenswrapper[4693]: I1212 16:09:32.187350 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh"
Dec 12 16:09:32 crc kubenswrapper[4693]: I1212 16:09:32.420058 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w5zgk"
Dec 12 16:09:32 crc kubenswrapper[4693]: I1212 16:09:32.468307 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss"
Dec 12 16:09:32 crc kubenswrapper[4693]: I1212 16:09:32.541032 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d"
Dec 12 16:09:32 crc kubenswrapper[4693]: I1212 16:09:32.741926 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h"
Dec 12 16:09:32 crc kubenswrapper[4693]: I1212 16:09:32.790699 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc"
Dec 12 16:09:34 crc kubenswrapper[4693]: I1212 16:09:34.481709 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl"
Dec 12 16:09:37 crc kubenswrapper[4693]: I1212 16:09:37.740071 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc" event={"ID":"fc2504e1-7808-49ef-9df0-2fda81f786f6","Type":"ContainerStarted","Data":"dbd10569f7593b4fb2dca1c767ec4cf4839d4b130da53409664520fc2922ca79"}
Dec 12 16:09:37 crc kubenswrapper[4693]: I1212 16:09:37.741419 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc"
Dec 12 16:09:37 crc kubenswrapper[4693]: I1212 16:09:37.776440 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc" podStartSLOduration=4.419318345 podStartE2EDuration="1m16.776410769s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:24.607761158 +0000 UTC m=+1331.776400759" lastFinishedPulling="2025-12-12 16:09:36.964853582 +0000 UTC m=+1404.133493183" observedRunningTime="2025-12-12 16:09:37.77088344 +0000 UTC m=+1404.939523031" watchObservedRunningTime="2025-12-12 16:09:37.776410769 +0000 UTC m=+1404.945050370"
Dec 12 16:09:38 crc kubenswrapper[4693]: I1212 16:09:38.752263 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" event={"ID":"26537316-7b55-48dc-b952-bc2220120194","Type":"ContainerStarted","Data":"f1ce122a71b4d8aecfcffa03791a777791b0d08bbe628f3bb4992c1d9a31da69"}
Dec 12 16:09:38 crc kubenswrapper[4693]: I1212 16:09:38.752838 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf"
Dec 12 16:09:38 crc kubenswrapper[4693]: I1212 16:09:38.774445 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" podStartSLOduration=3.808728974 podStartE2EDuration="1m17.774422361s" podCreationTimestamp="2025-12-12 16:08:21 +0000 UTC" firstStartedPulling="2025-12-12 16:08:24.554664251 +0000 UTC m=+1331.723303862" lastFinishedPulling="2025-12-12 16:09:38.520357648 +0000 UTC m=+1405.688997249" observedRunningTime="2025-12-12 16:09:38.773507586 +0000 UTC m=+1405.942147177" watchObservedRunningTime="2025-12-12 16:09:38.774422361 +0000 UTC m=+1405.943061962"
Dec 12 16:09:42 crc kubenswrapper[4693]: I1212 16:09:42.530730 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:09:42 crc kubenswrapper[4693]: I1212 16:09:42.532295 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:09:42 crc kubenswrapper[4693]: I1212 16:09:42.914008 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc"
Dec 12 16:09:52 crc kubenswrapper[4693]: I1212 16:09:52.840698 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.370561 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xmjz8"]
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.374476 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xmjz8"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.376638 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-nvwdl"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.378564 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.378905 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.379020 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.401347 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xmjz8"]
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.467797 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74bcl\" (UniqueName: \"kubernetes.io/projected/53107282-9d7c-4944-9da2-8efd5ef1500d-kube-api-access-74bcl\") pod \"dnsmasq-dns-675f4bcbfc-xmjz8\" (UID: \"53107282-9d7c-4944-9da2-8efd5ef1500d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xmjz8"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.467858 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53107282-9d7c-4944-9da2-8efd5ef1500d-config\") pod \"dnsmasq-dns-675f4bcbfc-xmjz8\" (UID: \"53107282-9d7c-4944-9da2-8efd5ef1500d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xmjz8"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.481808 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-46hld"]
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.484346 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-46hld"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.486000 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.502701 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-46hld"]
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.533684 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.533746 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.533799 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.534584 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fa5a22453d813e4ad162e4fc8b28463dbad032801eec3a25e1c47d7ec02c9b9a"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.534651 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://fa5a22453d813e4ad162e4fc8b28463dbad032801eec3a25e1c47d7ec02c9b9a" gracePeriod=600
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.569194 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14f1710-5bb5-4333-b584-d5bff01ec285-config\") pod \"dnsmasq-dns-78dd6ddcc-46hld\" (UID: \"c14f1710-5bb5-4333-b584-d5bff01ec285\") " pod="openstack/dnsmasq-dns-78dd6ddcc-46hld"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.569388 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74bcl\" (UniqueName: \"kubernetes.io/projected/53107282-9d7c-4944-9da2-8efd5ef1500d-kube-api-access-74bcl\") pod \"dnsmasq-dns-675f4bcbfc-xmjz8\" (UID: \"53107282-9d7c-4944-9da2-8efd5ef1500d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xmjz8"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.569434 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c14f1710-5bb5-4333-b584-d5bff01ec285-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-46hld\" (UID: \"c14f1710-5bb5-4333-b584-d5bff01ec285\") " pod="openstack/dnsmasq-dns-78dd6ddcc-46hld"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.569454 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53107282-9d7c-4944-9da2-8efd5ef1500d-config\") pod \"dnsmasq-dns-675f4bcbfc-xmjz8\" (UID: \"53107282-9d7c-4944-9da2-8efd5ef1500d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xmjz8"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.569535 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq6zv\" (UniqueName: \"kubernetes.io/projected/c14f1710-5bb5-4333-b584-d5bff01ec285-kube-api-access-fq6zv\") pod \"dnsmasq-dns-78dd6ddcc-46hld\" (UID: \"c14f1710-5bb5-4333-b584-d5bff01ec285\") " pod="openstack/dnsmasq-dns-78dd6ddcc-46hld"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.570831 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53107282-9d7c-4944-9da2-8efd5ef1500d-config\") pod \"dnsmasq-dns-675f4bcbfc-xmjz8\" (UID: \"53107282-9d7c-4944-9da2-8efd5ef1500d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xmjz8"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.597680 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74bcl\" (UniqueName: \"kubernetes.io/projected/53107282-9d7c-4944-9da2-8efd5ef1500d-kube-api-access-74bcl\") pod \"dnsmasq-dns-675f4bcbfc-xmjz8\" (UID: \"53107282-9d7c-4944-9da2-8efd5ef1500d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xmjz8"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.671588 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq6zv\" (UniqueName: \"kubernetes.io/projected/c14f1710-5bb5-4333-b584-d5bff01ec285-kube-api-access-fq6zv\") pod \"dnsmasq-dns-78dd6ddcc-46hld\" (UID: \"c14f1710-5bb5-4333-b584-d5bff01ec285\") " pod="openstack/dnsmasq-dns-78dd6ddcc-46hld"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.671898 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14f1710-5bb5-4333-b584-d5bff01ec285-config\") pod \"dnsmasq-dns-78dd6ddcc-46hld\" (UID: \"c14f1710-5bb5-4333-b584-d5bff01ec285\") " pod="openstack/dnsmasq-dns-78dd6ddcc-46hld"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.671981 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c14f1710-5bb5-4333-b584-d5bff01ec285-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-46hld\" (UID: \"c14f1710-5bb5-4333-b584-d5bff01ec285\") " pod="openstack/dnsmasq-dns-78dd6ddcc-46hld"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.672820 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c14f1710-5bb5-4333-b584-d5bff01ec285-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-46hld\" (UID: \"c14f1710-5bb5-4333-b584-d5bff01ec285\") " pod="openstack/dnsmasq-dns-78dd6ddcc-46hld"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.673146 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14f1710-5bb5-4333-b584-d5bff01ec285-config\") pod \"dnsmasq-dns-78dd6ddcc-46hld\" (UID: \"c14f1710-5bb5-4333-b584-d5bff01ec285\") " pod="openstack/dnsmasq-dns-78dd6ddcc-46hld"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.690973 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq6zv\" (UniqueName: \"kubernetes.io/projected/c14f1710-5bb5-4333-b584-d5bff01ec285-kube-api-access-fq6zv\") pod \"dnsmasq-dns-78dd6ddcc-46hld\" (UID: \"c14f1710-5bb5-4333-b584-d5bff01ec285\") " pod="openstack/dnsmasq-dns-78dd6ddcc-46hld"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.701808 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xmjz8"
Dec 12 16:10:12 crc kubenswrapper[4693]: I1212 16:10:12.807691 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-46hld"
Dec 12 16:10:13 crc kubenswrapper[4693]: I1212 16:10:13.246530 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xmjz8"]
Dec 12 16:10:13 crc kubenswrapper[4693]: I1212 16:10:13.260689 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="fa5a22453d813e4ad162e4fc8b28463dbad032801eec3a25e1c47d7ec02c9b9a" exitCode=0
Dec 12 16:10:13 crc kubenswrapper[4693]: I1212 16:10:13.260848 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"fa5a22453d813e4ad162e4fc8b28463dbad032801eec3a25e1c47d7ec02c9b9a"}
Dec 12 16:10:13 crc kubenswrapper[4693]: I1212 16:10:13.261393 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67"}
Dec 12 16:10:13 crc kubenswrapper[4693]: I1212 16:10:13.261469 4693 scope.go:117] "RemoveContainer" containerID="9b99609eca8bf887c0f086d452cd1f8437812e8c5e6edb0ab2c3a059f6382847"
Dec 12 16:10:13 crc kubenswrapper[4693]: I1212 16:10:13.271553 4693 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 12 16:10:13 crc kubenswrapper[4693]: I1212 16:10:13.380677 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-46hld"]
Dec 12 16:10:13 crc kubenswrapper[4693]: W1212 16:10:13.381919 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc14f1710_5bb5_4333_b584_d5bff01ec285.slice/crio-e86a4cbec584236b9452f75be7af39bac6e10e093513f40103a0b8db9eb95fb2 WatchSource:0}: Error finding container e86a4cbec584236b9452f75be7af39bac6e10e093513f40103a0b8db9eb95fb2: Status 404 returned error can't find the container with id e86a4cbec584236b9452f75be7af39bac6e10e093513f40103a0b8db9eb95fb2
Dec 12 16:10:14 crc kubenswrapper[4693]: I1212 16:10:14.273511 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xmjz8" event={"ID":"53107282-9d7c-4944-9da2-8efd5ef1500d","Type":"ContainerStarted","Data":"cedfd21b4681fb61164b77df88deb522e975c0125c62d2d34c9eec45de290c75"}
Dec 12 16:10:14 crc kubenswrapper[4693]: I1212 16:10:14.282218 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-46hld" event={"ID":"c14f1710-5bb5-4333-b584-d5bff01ec285","Type":"ContainerStarted","Data":"e86a4cbec584236b9452f75be7af39bac6e10e093513f40103a0b8db9eb95fb2"}
Dec 12 16:10:15 crc kubenswrapper[4693]: I1212 16:10:15.560333 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xmjz8"]
Dec 12 16:10:15 crc kubenswrapper[4693]: I1212 16:10:15.596467 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6cwfg"]
Dec 12 16:10:15 crc kubenswrapper[4693]: I1212 16:10:15.598021 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6cwfg"
Dec 12 16:10:15 crc kubenswrapper[4693]: I1212 16:10:15.618801 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6cwfg"]
Dec 12 16:10:15 crc kubenswrapper[4693]: I1212 16:10:15.639541 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5827a44-9073-412a-90ec-653b5ac3f5fd-config\") pod \"dnsmasq-dns-666b6646f7-6cwfg\" (UID: \"b5827a44-9073-412a-90ec-653b5ac3f5fd\") " pod="openstack/dnsmasq-dns-666b6646f7-6cwfg"
Dec 12 16:10:15 crc kubenswrapper[4693]: I1212 16:10:15.639606 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ps9j\" (UniqueName: \"kubernetes.io/projected/b5827a44-9073-412a-90ec-653b5ac3f5fd-kube-api-access-5ps9j\") pod \"dnsmasq-dns-666b6646f7-6cwfg\" (UID: \"b5827a44-9073-412a-90ec-653b5ac3f5fd\") " pod="openstack/dnsmasq-dns-666b6646f7-6cwfg"
Dec 12 16:10:15 crc kubenswrapper[4693]: I1212 16:10:15.639663 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5827a44-9073-412a-90ec-653b5ac3f5fd-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6cwfg\" (UID: \"b5827a44-9073-412a-90ec-653b5ac3f5fd\") " pod="openstack/dnsmasq-dns-666b6646f7-6cwfg"
Dec 12 16:10:15 crc kubenswrapper[4693]: I1212 16:10:15.749697 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5827a44-9073-412a-90ec-653b5ac3f5fd-config\") pod \"dnsmasq-dns-666b6646f7-6cwfg\" (UID: \"b5827a44-9073-412a-90ec-653b5ac3f5fd\") " pod="openstack/dnsmasq-dns-666b6646f7-6cwfg"
Dec 12 16:10:15 crc kubenswrapper[4693]: I1212 16:10:15.749785 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ps9j\" (UniqueName: \"kubernetes.io/projected/b5827a44-9073-412a-90ec-653b5ac3f5fd-kube-api-access-5ps9j\") pod \"dnsmasq-dns-666b6646f7-6cwfg\" (UID: \"b5827a44-9073-412a-90ec-653b5ac3f5fd\") " pod="openstack/dnsmasq-dns-666b6646f7-6cwfg"
Dec 12 16:10:15 crc kubenswrapper[4693]: I1212 16:10:15.749840 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5827a44-9073-412a-90ec-653b5ac3f5fd-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6cwfg\" (UID: \"b5827a44-9073-412a-90ec-653b5ac3f5fd\") " pod="openstack/dnsmasq-dns-666b6646f7-6cwfg"
Dec 12 16:10:15 crc kubenswrapper[4693]: I1212 16:10:15.750869 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5827a44-9073-412a-90ec-653b5ac3f5fd-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6cwfg\" (UID: \"b5827a44-9073-412a-90ec-653b5ac3f5fd\") " pod="openstack/dnsmasq-dns-666b6646f7-6cwfg"
Dec 12 16:10:15 crc kubenswrapper[4693]: I1212 16:10:15.751602 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5827a44-9073-412a-90ec-653b5ac3f5fd-config\") pod \"dnsmasq-dns-666b6646f7-6cwfg\" (UID: \"b5827a44-9073-412a-90ec-653b5ac3f5fd\") " pod="openstack/dnsmasq-dns-666b6646f7-6cwfg"
Dec 12 16:10:15 crc kubenswrapper[4693]: I1212 16:10:15.783020 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ps9j\" (UniqueName: \"kubernetes.io/projected/b5827a44-9073-412a-90ec-653b5ac3f5fd-kube-api-access-5ps9j\") pod \"dnsmasq-dns-666b6646f7-6cwfg\" (UID: \"b5827a44-9073-412a-90ec-653b5ac3f5fd\") " pod="openstack/dnsmasq-dns-666b6646f7-6cwfg"
Dec 12 16:10:15 crc kubenswrapper[4693]: I1212 16:10:15.934326 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6cwfg"
Dec 12 16:10:15 crc kubenswrapper[4693]: I1212 16:10:15.989648 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-46hld"]
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.043381 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4l2q9"]
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.048248 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.061749 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4l2q9"]
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.158979 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/917b9605-32a3-4e61-9127-aff641344aa3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4l2q9\" (UID: \"917b9605-32a3-4e61-9127-aff641344aa3\") " pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.159072 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/917b9605-32a3-4e61-9127-aff641344aa3-config\") pod \"dnsmasq-dns-57d769cc4f-4l2q9\" (UID: \"917b9605-32a3-4e61-9127-aff641344aa3\") " pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.159102 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xws2l\" (UniqueName: \"kubernetes.io/projected/917b9605-32a3-4e61-9127-aff641344aa3-kube-api-access-xws2l\") pod \"dnsmasq-dns-57d769cc4f-4l2q9\" (UID: \"917b9605-32a3-4e61-9127-aff641344aa3\") " pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.260435 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/917b9605-32a3-4e61-9127-aff641344aa3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4l2q9\" (UID: \"917b9605-32a3-4e61-9127-aff641344aa3\") " pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.260566 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/917b9605-32a3-4e61-9127-aff641344aa3-config\") pod \"dnsmasq-dns-57d769cc4f-4l2q9\" (UID: \"917b9605-32a3-4e61-9127-aff641344aa3\") " pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.260592 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xws2l\" (UniqueName: \"kubernetes.io/projected/917b9605-32a3-4e61-9127-aff641344aa3-kube-api-access-xws2l\") pod \"dnsmasq-dns-57d769cc4f-4l2q9\" (UID: \"917b9605-32a3-4e61-9127-aff641344aa3\") " pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.262956 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/917b9605-32a3-4e61-9127-aff641344aa3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4l2q9\" (UID: \"917b9605-32a3-4e61-9127-aff641344aa3\") " pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.271740 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/917b9605-32a3-4e61-9127-aff641344aa3-config\") pod \"dnsmasq-dns-57d769cc4f-4l2q9\" (UID: \"917b9605-32a3-4e61-9127-aff641344aa3\") " pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.300257 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xws2l\" (UniqueName: \"kubernetes.io/projected/917b9605-32a3-4e61-9127-aff641344aa3-kube-api-access-xws2l\") pod \"dnsmasq-dns-57d769cc4f-4l2q9\" (UID: \"917b9605-32a3-4e61-9127-aff641344aa3\") " pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.433590 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.639913 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6cwfg"]
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.822302 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.824003 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.831581 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.831597 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.831757 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.831807 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.831837 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.831757 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-d24fp"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.831969 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.835743 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.837511 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.864560 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"]
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.866600 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.876498 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.886092 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.899061 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.956227 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4l2q9"]
Dec 12 16:10:16 crc kubenswrapper[4693]: W1212 16:10:16.970586 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod917b9605_32a3_4e61_9127_aff641344aa3.slice/crio-46280df7afc0dc8cdaa80b68f62f836a2f72250a27e46e2af8de3aec48f9d31c WatchSource:0}: Error finding container 46280df7afc0dc8cdaa80b68f62f836a2f72250a27e46e2af8de3aec48f9d31c: Status 404 returned error can't find the container with id 46280df7afc0dc8cdaa80b68f62f836a2f72250a27e46e2af8de3aec48f9d31c
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.983701 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-config-data\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.983742 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.983774 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62a37a53-6f53-4b51-b493-edfdb42c3a93-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.983792 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.983810 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-server-conf\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.983827 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-server-conf\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.983870 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.983895 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9mdk\" (UniqueName: \"kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-kube-api-access-m9mdk\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.983911 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-config-data\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.984064 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.984128 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.984175 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.984206 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62a37a53-6f53-4b51-b493-edfdb42c3a93-pod-info\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.984357 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.984429 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.984481 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qqft\" (UniqueName: \"kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-kube-api-access-8qqft\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.984504 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.984536 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.984630 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.984668 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.985132 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.985187 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.985251 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.985302 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d45363e2-3684-4fc6-b322-d99e6e87d3fd-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.985401 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.985436 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-config-data\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.985485 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.985517 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d45363e2-3684-4fc6-b322-d99e6e87d3fd-pod-info\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.985560 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.985624 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.985662 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c30820e7-7bd6-46ee-92cd-615319618f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c30820e7-7bd6-46ee-92cd-615319618f91\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.985697 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72fpp\" (UniqueName: \"kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-kube-api-access-72fpp\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:16 crc kubenswrapper[4693]: I1212 16:10:16.985720 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.087776 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qqft\" (UniqueName: \"kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-kube-api-access-8qqft\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.087837 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.087864 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.087900 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.087926 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.087957 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.087993 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088025 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088051 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d45363e2-3684-4fc6-b322-d99e6e87d3fd-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088095 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088122 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-config-data\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088155 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088180 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d45363e2-3684-4fc6-b322-d99e6e87d3fd-pod-info\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088207 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088243 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088287 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c30820e7-7bd6-46ee-92cd-615319618f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c30820e7-7bd6-46ee-92cd-615319618f91\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088315 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72fpp\" (UniqueName: \"kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-kube-api-access-72fpp\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088337 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088377 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-config-data\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088403 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088434 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62a37a53-6f53-4b51-b493-edfdb42c3a93-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088456 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088477 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-server-conf\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088516 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-server-conf\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088549 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088587 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9mdk\" (UniqueName: \"kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-kube-api-access-m9mdk\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088610 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-config-data\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088637 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088661 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088696 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088718 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62a37a53-6f53-4b51-b493-edfdb42c3a93-pod-info\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088743 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.088776 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.089193 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.089312 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.089589 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.090721 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.091009 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0"
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.091047 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.091263 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.092243 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-config-data\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.093386 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-server-conf\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.093649 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.095117 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62a37a53-6f53-4b51-b493-edfdb42c3a93-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.095248 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.098455 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.098700 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-config-data\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.099506 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.100701 4693 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-config-data\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.101444 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.101536 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.101572 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7659dab294232f1341c4bc34db4715b7842ffd31c77cab48e5c6a75713a05aea/globalmount\"" pod="openstack/rabbitmq-server-1" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.109492 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.109539 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c30820e7-7bd6-46ee-92cd-615319618f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c30820e7-7bd6-46ee-92cd-615319618f91\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/230e84eca29ef978c6e938c2248beafe68d1fc4f5fdf1e28b05ba9d43b4abe39/globalmount\"" pod="openstack/rabbitmq-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.113469 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-server-conf\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.111072 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.113659 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.114168 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d45363e2-3684-4fc6-b322-d99e6e87d3fd-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: 
\"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.116662 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d45363e2-3684-4fc6-b322-d99e6e87d3fd-pod-info\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.122705 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.124995 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9mdk\" (UniqueName: \"kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-kube-api-access-m9mdk\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.125003 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.125520 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.125996 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.126105 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5730403f2a641c2d36c77eefd13e83fe4cbfb23ef325a0e0333962c172190de6/globalmount\"" pod="openstack/rabbitmq-server-2" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.134715 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72fpp\" (UniqueName: \"kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-kube-api-access-72fpp\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.135363 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.137504 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62a37a53-6f53-4b51-b493-edfdb42c3a93-pod-info\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.139930 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qqft\" (UniqueName: \"kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-kube-api-access-8qqft\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.172023 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.173963 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.183382 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.183497 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2s7vz" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.183684 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.183745 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.183392 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.183783 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.184353 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.194037 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.242504 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\") pod \"rabbitmq-server-1\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " pod="openstack/rabbitmq-server-1" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.251953 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\") pod \"rabbitmq-server-2\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " pod="openstack/rabbitmq-server-2" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.294423 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tjgz\" (UniqueName: \"kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-kube-api-access-7tjgz\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.294480 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.294511 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.294576 
4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6fd6556d-68c5-4492-804c-bc3188ab39b7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.294610 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.294667 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.294692 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.294712 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6fd6556d-68c5-4492-804c-bc3188ab39b7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.294741 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.294807 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.294851 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.405952 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: 
I1212 16:10:17.406058 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.406147 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.406229 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tjgz\" (UniqueName: \"kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-kube-api-access-7tjgz\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.406292 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.406359 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.406488 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6fd6556d-68c5-4492-804c-bc3188ab39b7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.406545 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.406665 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.406701 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.406736 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6fd6556d-68c5-4492-804c-bc3188ab39b7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.409491 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.409847 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.410917 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.411188 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.422030 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.423728 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6fd6556d-68c5-4492-804c-bc3188ab39b7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.439245 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.439555 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4cf1f32a83213719dc85bfec44caf9d44a7960a8ddbf15eb937fd4eb898307df/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.451587 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6cwfg" event={"ID":"b5827a44-9073-412a-90ec-653b5ac3f5fd","Type":"ContainerStarted","Data":"e2024a156dca85df3754725ff427c2d1e4ec7f6c85d942da702ed7961c0f1438"} Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.453684 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9" event={"ID":"917b9605-32a3-4e61-9127-aff641344aa3","Type":"ContainerStarted","Data":"46280df7afc0dc8cdaa80b68f62f836a2f72250a27e46e2af8de3aec48f9d31c"} Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.454599 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.464175 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tjgz\" (UniqueName: \"kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-kube-api-access-7tjgz\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.468144 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.469923 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6fd6556d-68c5-4492-804c-bc3188ab39b7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.496124 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c30820e7-7bd6-46ee-92cd-615319618f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c30820e7-7bd6-46ee-92cd-615319618f91\") pod \"rabbitmq-server-0\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " pod="openstack/rabbitmq-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.506484 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.515962 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.571345 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\") pod \"rabbitmq-cell1-server-0\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.787514 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 12 16:10:17 crc kubenswrapper[4693]: I1212 16:10:17.863563 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.134423 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Dec 12 16:10:18 crc kubenswrapper[4693]: W1212 16:10:18.178177 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd45363e2_3684_4fc6_b322_d99e6e87d3fd.slice/crio-9186aa32cbfd0a90bdca08a6a278274366680a126a12d85a1784b15934429b6a WatchSource:0}: Error finding container 9186aa32cbfd0a90bdca08a6a278274366680a126a12d85a1784b15934429b6a: Status 404 returned error can't find the container with id 9186aa32cbfd0a90bdca08a6a278274366680a126a12d85a1784b15934429b6a Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.257585 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.395293 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 16:10:18 crc kubenswrapper[4693]: W1212 16:10:18.399570 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fd6556d_68c5_4492_804c_bc3188ab39b7.slice/crio-ce5a06b4ffee453d276eda4625c2a45632134402fadc573ef2ed9fc1f895544e WatchSource:0}: Error finding container ce5a06b4ffee453d276eda4625c2a45632134402fadc573ef2ed9fc1f895544e: Status 404 returned error can't find the container with id ce5a06b4ffee453d276eda4625c2a45632134402fadc573ef2ed9fc1f895544e Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.502475 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"62a37a53-6f53-4b51-b493-edfdb42c3a93","Type":"ContainerStarted","Data":"c6090bf6f211e1ed5df5a86159a126cefedd762fd8c7b221d39e99193bf47586"} Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.508839 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.510555 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.516169 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"d45363e2-3684-4fc6-b322-d99e6e87d3fd","Type":"ContainerStarted","Data":"9186aa32cbfd0a90bdca08a6a278274366680a126a12d85a1784b15934429b6a"} Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.516355 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-4vd26" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.516527 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.517355 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.517457 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.522219 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.540384 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.544873 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6fd6556d-68c5-4492-804c-bc3188ab39b7","Type":"ContainerStarted","Data":"ce5a06b4ffee453d276eda4625c2a45632134402fadc573ef2ed9fc1f895544e"} Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.553739 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a9471a93-6eb1-4d31-a635-0da6af00c25a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9471a93-6eb1-4d31-a635-0da6af00c25a\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.554051 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73559c8b-d017-4a5d-aced-3da25d264b0a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.554247 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/73559c8b-d017-4a5d-aced-3da25d264b0a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.554427 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/73559c8b-d017-4a5d-aced-3da25d264b0a-kolla-config\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.554557 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/73559c8b-d017-4a5d-aced-3da25d264b0a-config-data-default\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.554778 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsn2h\" (UniqueName: \"kubernetes.io/projected/73559c8b-d017-4a5d-aced-3da25d264b0a-kube-api-access-fsn2h\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.554895 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73559c8b-d017-4a5d-aced-3da25d264b0a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.555001 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/73559c8b-d017-4a5d-aced-3da25d264b0a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.561318 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 12 16:10:18 crc kubenswrapper[4693]: W1212 16:10:18.572974 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d1046a8_e83f_4c4f_8ac3_1110bb6f62db.slice/crio-13698531da397467abb80465328e33597291eb7bbb89a23658ed1bdf255e7dbe WatchSource:0}: Error finding container 13698531da397467abb80465328e33597291eb7bbb89a23658ed1bdf255e7dbe: Status 404 returned error can't find the container with id 13698531da397467abb80465328e33597291eb7bbb89a23658ed1bdf255e7dbe Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.662249 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73559c8b-d017-4a5d-aced-3da25d264b0a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.662396 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/73559c8b-d017-4a5d-aced-3da25d264b0a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.662462 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/73559c8b-d017-4a5d-aced-3da25d264b0a-kolla-config\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.662530 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/73559c8b-d017-4a5d-aced-3da25d264b0a-config-data-default\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " 
pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.662624 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsn2h\" (UniqueName: \"kubernetes.io/projected/73559c8b-d017-4a5d-aced-3da25d264b0a-kube-api-access-fsn2h\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.662647 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/73559c8b-d017-4a5d-aced-3da25d264b0a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.662668 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73559c8b-d017-4a5d-aced-3da25d264b0a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.662737 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a9471a93-6eb1-4d31-a635-0da6af00c25a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9471a93-6eb1-4d31-a635-0da6af00c25a\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.663991 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73559c8b-d017-4a5d-aced-3da25d264b0a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.664014 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/73559c8b-d017-4a5d-aced-3da25d264b0a-config-data-default\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.664368 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/73559c8b-d017-4a5d-aced-3da25d264b0a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.664587 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/73559c8b-d017-4a5d-aced-3da25d264b0a-kolla-config\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.669043 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.669083 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a9471a93-6eb1-4d31-a635-0da6af00c25a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9471a93-6eb1-4d31-a635-0da6af00c25a\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ad62ec2bbe22b4da9ec2ae20350bbf32cf813b9fb38ff8634ad95132f43cd4d1/globalmount\"" pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.670021 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/73559c8b-d017-4a5d-aced-3da25d264b0a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.672464 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73559c8b-d017-4a5d-aced-3da25d264b0a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.682794 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsn2h\" (UniqueName: \"kubernetes.io/projected/73559c8b-d017-4a5d-aced-3da25d264b0a-kube-api-access-fsn2h\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.726117 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a9471a93-6eb1-4d31-a635-0da6af00c25a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9471a93-6eb1-4d31-a635-0da6af00c25a\") pod \"openstack-galera-0\" (UID: \"73559c8b-d017-4a5d-aced-3da25d264b0a\") " pod="openstack/openstack-galera-0" Dec 12 16:10:18 crc kubenswrapper[4693]: I1212 16:10:18.833128 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.460380 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 12 16:10:19 crc kubenswrapper[4693]: W1212 16:10:19.480578 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73559c8b_d017_4a5d_aced_3da25d264b0a.slice/crio-d85a32dc13c8068b98e40657d31e78875553e15aab74b1931fd861b9be3b002f WatchSource:0}: Error finding container d85a32dc13c8068b98e40657d31e78875553e15aab74b1931fd861b9be3b002f: Status 404 returned error can't find the container with id d85a32dc13c8068b98e40657d31e78875553e15aab74b1931fd861b9be3b002f Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.574904 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db","Type":"ContainerStarted","Data":"13698531da397467abb80465328e33597291eb7bbb89a23658ed1bdf255e7dbe"} Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.580453 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"73559c8b-d017-4a5d-aced-3da25d264b0a","Type":"ContainerStarted","Data":"d85a32dc13c8068b98e40657d31e78875553e15aab74b1931fd861b9be3b002f"} Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.778197 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.780766 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.786330 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.786463 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.787516 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-9xtvg" Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.787617 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.788696 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.898953 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-46a5b3ea-a506-496d-8be9-5b110ca26bc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46a5b3ea-a506-496d-8be9-5b110ca26bc9\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.899011 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ec693a73-a415-42a1-98f4-86438aa58d56-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.899092 4693 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec693a73-a415-42a1-98f4-86438aa58d56-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.899207 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec693a73-a415-42a1-98f4-86438aa58d56-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.899297 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ec693a73-a415-42a1-98f4-86438aa58d56-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.899370 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec693a73-a415-42a1-98f4-86438aa58d56-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.899397 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5vg4\" (UniqueName: \"kubernetes.io/projected/ec693a73-a415-42a1-98f4-86438aa58d56-kube-api-access-j5vg4\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:19 crc kubenswrapper[4693]: I1212 16:10:19.899524 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec693a73-a415-42a1-98f4-86438aa58d56-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.001176 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ec693a73-a415-42a1-98f4-86438aa58d56-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.001239 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec693a73-a415-42a1-98f4-86438aa58d56-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.001259 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5vg4\" (UniqueName: \"kubernetes.io/projected/ec693a73-a415-42a1-98f4-86438aa58d56-kube-api-access-j5vg4\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 
16:10:20.001419 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec693a73-a415-42a1-98f4-86438aa58d56-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.001511 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-46a5b3ea-a506-496d-8be9-5b110ca26bc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46a5b3ea-a506-496d-8be9-5b110ca26bc9\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.001533 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ec693a73-a415-42a1-98f4-86438aa58d56-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.001568 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec693a73-a415-42a1-98f4-86438aa58d56-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.001595 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec693a73-a415-42a1-98f4-86438aa58d56-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.002203 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ec693a73-a415-42a1-98f4-86438aa58d56-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.002373 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec693a73-a415-42a1-98f4-86438aa58d56-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.003241 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ec693a73-a415-42a1-98f4-86438aa58d56-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.003241 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec693a73-a415-42a1-98f4-86438aa58d56-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.026109 4693 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.026187 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-46a5b3ea-a506-496d-8be9-5b110ca26bc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46a5b3ea-a506-496d-8be9-5b110ca26bc9\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c199aa14082619f00be05ead72bceafcd3d0590942c6fbe3f60564999256d5a4/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.026110 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec693a73-a415-42a1-98f4-86438aa58d56-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0"
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.028063 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec693a73-a415-42a1-98f4-86438aa58d56-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0"
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.029882 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5vg4\" (UniqueName: \"kubernetes.io/projected/ec693a73-a415-42a1-98f4-86438aa58d56-kube-api-access-j5vg4\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0"
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.036005 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
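The csi_attacher.go line above explains why MountDevice is effectively a no-op for this PVC: the hostpath provisioner does not advertise the STAGE_UNSTAGE_VOLUME node capability, so the kubelet skips NodeStageVolume, reports MountDevice as succeeded immediately, and proceeds straight to SetUp (NodePublishVolume). A minimal sketch of querying that capability over a CSI driver's node service; the socket path is hypothetical and depends on how the driver is deployed:

package main

import (
	"context"
	"fmt"
	"log"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Hypothetical plugin socket; real deployments register the driver
	// somewhere under /var/lib/kubelet/plugins/.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/kubevirt.io.hostpath-provisioner/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	resp, err := csi.NewNodeClient(conn).NodeGetCapabilities(context.TODO(),
		&csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		log.Fatal(err)
	}
	staged := false
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			staged = true
		}
	}
	// false here corresponds to the "STAGE_UNSTAGE_VOLUME capability not
	// set. Skipping MountDevice..." line in the log above.
	fmt.Println("STAGE_UNSTAGE_VOLUME advertised:", staged)
}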
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.037765 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.043113 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.043300 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.043421 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-fns7m"
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.049349 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.109310 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f5c0331-ca61-42ce-989a-499b5eb81bc9-kolla-config\") pod \"memcached-0\" (UID: \"8f5c0331-ca61-42ce-989a-499b5eb81bc9\") " pod="openstack/memcached-0"
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.112054 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt8qp\" (UniqueName: \"kubernetes.io/projected/8f5c0331-ca61-42ce-989a-499b5eb81bc9-kube-api-access-xt8qp\") pod \"memcached-0\" (UID: \"8f5c0331-ca61-42ce-989a-499b5eb81bc9\") " pod="openstack/memcached-0"
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.112169 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5c0331-ca61-42ce-989a-499b5eb81bc9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8f5c0331-ca61-42ce-989a-499b5eb81bc9\") " pod="openstack/memcached-0"
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.119966 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f5c0331-ca61-42ce-989a-499b5eb81bc9-config-data\") pod \"memcached-0\" (UID: \"8f5c0331-ca61-42ce-989a-499b5eb81bc9\") " pod="openstack/memcached-0"
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.120077 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5c0331-ca61-42ce-989a-499b5eb81bc9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8f5c0331-ca61-42ce-989a-499b5eb81bc9\") " pod="openstack/memcached-0"
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.174477 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-46a5b3ea-a506-496d-8be9-5b110ca26bc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46a5b3ea-a506-496d-8be9-5b110ca26bc9\") pod \"openstack-cell1-galera-0\" (UID: \"ec693a73-a415-42a1-98f4-86438aa58d56\") " pod="openstack/openstack-cell1-galera-0"
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.221598 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5c0331-ca61-42ce-989a-499b5eb81bc9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8f5c0331-ca61-42ce-989a-499b5eb81bc9\") " pod="openstack/memcached-0"
\"kubernetes.io/configmap/8f5c0331-ca61-42ce-989a-499b5eb81bc9-kolla-config\") pod \"memcached-0\" (UID: \"8f5c0331-ca61-42ce-989a-499b5eb81bc9\") " pod="openstack/memcached-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.222130 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt8qp\" (UniqueName: \"kubernetes.io/projected/8f5c0331-ca61-42ce-989a-499b5eb81bc9-kube-api-access-xt8qp\") pod \"memcached-0\" (UID: \"8f5c0331-ca61-42ce-989a-499b5eb81bc9\") " pod="openstack/memcached-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.222321 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5c0331-ca61-42ce-989a-499b5eb81bc9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8f5c0331-ca61-42ce-989a-499b5eb81bc9\") " pod="openstack/memcached-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.222512 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f5c0331-ca61-42ce-989a-499b5eb81bc9-config-data\") pod \"memcached-0\" (UID: \"8f5c0331-ca61-42ce-989a-499b5eb81bc9\") " pod="openstack/memcached-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.223951 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f5c0331-ca61-42ce-989a-499b5eb81bc9-config-data\") pod \"memcached-0\" (UID: \"8f5c0331-ca61-42ce-989a-499b5eb81bc9\") " pod="openstack/memcached-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.224799 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f5c0331-ca61-42ce-989a-499b5eb81bc9-kolla-config\") pod \"memcached-0\" (UID: \"8f5c0331-ca61-42ce-989a-499b5eb81bc9\") " pod="openstack/memcached-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.230488 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5c0331-ca61-42ce-989a-499b5eb81bc9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8f5c0331-ca61-42ce-989a-499b5eb81bc9\") " pod="openstack/memcached-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.232954 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5c0331-ca61-42ce-989a-499b5eb81bc9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8f5c0331-ca61-42ce-989a-499b5eb81bc9\") " pod="openstack/memcached-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.242847 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt8qp\" (UniqueName: \"kubernetes.io/projected/8f5c0331-ca61-42ce-989a-499b5eb81bc9-kube-api-access-xt8qp\") pod \"memcached-0\" (UID: \"8f5c0331-ca61-42ce-989a-499b5eb81bc9\") " pod="openstack/memcached-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.428214 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.450510 4693 util.go:30] "No sandbox for pod can be found. 
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.450510 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Dec 12 16:10:20 crc kubenswrapper[4693]: I1212 16:10:20.977885 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Dec 12 16:10:21 crc kubenswrapper[4693]: I1212 16:10:21.101907 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Dec 12 16:10:21 crc kubenswrapper[4693]: I1212 16:10:21.612305 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ec693a73-a415-42a1-98f4-86438aa58d56","Type":"ContainerStarted","Data":"93acbcc926bc0fc0c8d8d728b16cc422bfd75beeee2bf29b17c7fbe7a0aed839"}
Dec 12 16:10:21 crc kubenswrapper[4693]: I1212 16:10:21.614046 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8f5c0331-ca61-42ce-989a-499b5eb81bc9","Type":"ContainerStarted","Data":"af9b0781b73b4bcb40a8119e7569ac2d6a11e8486dfc4097286cdb4655f4ad42"}
Dec 12 16:10:22 crc kubenswrapper[4693]: I1212 16:10:22.679569 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 12 16:10:22 crc kubenswrapper[4693]: I1212 16:10:22.681179 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 12 16:10:22 crc kubenswrapper[4693]: I1212 16:10:22.685840 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-j6s76"
Dec 12 16:10:22 crc kubenswrapper[4693]: I1212 16:10:22.738469 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 12 16:10:22 crc kubenswrapper[4693]: I1212 16:10:22.798667 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn2lh\" (UniqueName: \"kubernetes.io/projected/0d8c08a3-7925-40d9-afbc-76755b6c0263-kube-api-access-pn2lh\") pod \"kube-state-metrics-0\" (UID: \"0d8c08a3-7925-40d9-afbc-76755b6c0263\") " pod="openstack/kube-state-metrics-0"
Dec 12 16:10:22 crc kubenswrapper[4693]: I1212 16:10:22.902487 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn2lh\" (UniqueName: \"kubernetes.io/projected/0d8c08a3-7925-40d9-afbc-76755b6c0263-kube-api-access-pn2lh\") pod \"kube-state-metrics-0\" (UID: \"0d8c08a3-7925-40d9-afbc-76755b6c0263\") " pod="openstack/kube-state-metrics-0"
Dec 12 16:10:22 crc kubenswrapper[4693]: I1212 16:10:22.947602 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn2lh\" (UniqueName: \"kubernetes.io/projected/0d8c08a3-7925-40d9-afbc-76755b6c0263-kube-api-access-pn2lh\") pod \"kube-state-metrics-0\" (UID: \"0d8c08a3-7925-40d9-afbc-76755b6c0263\") " pod="openstack/kube-state-metrics-0"
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.050892 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.589863 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-7mr5f"]
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.596975 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-7mr5f"
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.610009 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-nv6hr"
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.612489 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards"
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.620522 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-7mr5f"]
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.744147 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnqdk\" (UniqueName: \"kubernetes.io/projected/e57a8f08-dbcd-439a-9d89-d11a3bf7f33b-kube-api-access-pnqdk\") pod \"observability-ui-dashboards-7d5fb4cbfb-7mr5f\" (UID: \"e57a8f08-dbcd-439a-9d89-d11a3bf7f33b\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-7mr5f"
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.744249 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e57a8f08-dbcd-439a-9d89-d11a3bf7f33b-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-7mr5f\" (UID: \"e57a8f08-dbcd-439a-9d89-d11a3bf7f33b\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-7mr5f"
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.846561 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnqdk\" (UniqueName: \"kubernetes.io/projected/e57a8f08-dbcd-439a-9d89-d11a3bf7f33b-kube-api-access-pnqdk\") pod \"observability-ui-dashboards-7d5fb4cbfb-7mr5f\" (UID: \"e57a8f08-dbcd-439a-9d89-d11a3bf7f33b\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-7mr5f"
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.846696 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e57a8f08-dbcd-439a-9d89-d11a3bf7f33b-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-7mr5f\" (UID: \"e57a8f08-dbcd-439a-9d89-d11a3bf7f33b\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-7mr5f"
Dec 12 16:10:23 crc kubenswrapper[4693]: E1212 16:10:23.846848 4693 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found
Dec 12 16:10:23 crc kubenswrapper[4693]: E1212 16:10:23.846910 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e57a8f08-dbcd-439a-9d89-d11a3bf7f33b-serving-cert podName:e57a8f08-dbcd-439a-9d89-d11a3bf7f33b nodeName:}" failed. No retries permitted until 2025-12-12 16:10:24.346889102 +0000 UTC m=+1451.515528703 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e57a8f08-dbcd-439a-9d89-d11a3bf7f33b-serving-cert") pod "observability-ui-dashboards-7d5fb4cbfb-7mr5f" (UID: "e57a8f08-dbcd-439a-9d89-d11a3bf7f33b") : secret "observability-ui-dashboards" not found
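The secret.go and nestedpendingoperations.go errors above show the failure mode and its throttling: the serving-cert mount cannot proceed because the referenced secret does not exist yet, so the operation is parked and no retry is permitted for 500ms. On repeated failures the kubelet roughly doubles that delay up to a cap; the constants below mirror the upstream exponential backoff but are restated here as assumptions, not read from this log:

package main

import (
	"fmt"
	"time"
)

const (
	initialDurationBeforeRetry = 500 * time.Millisecond // matches the log's "durationBeforeRetry 500ms"
	maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second
)

func main() {
	// Print the delay armed before each retry while the secret stays missing.
	d := initialDurationBeforeRetry
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %2d: wait %v\n", attempt, d)
		d *= 2
		if d > maxDurationBeforeRetry {
			d = maxDurationBeforeRetry
		}
	}
}

In this capture the delay never escalates: the secret appears within a second and the retried mount succeeds, which is exactly what the later "MountVolume.SetUp succeeded for volume \"serving-cert\"" entry at 16:10:24.371629 shows.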
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.900334 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnqdk\" (UniqueName: \"kubernetes.io/projected/e57a8f08-dbcd-439a-9d89-d11a3bf7f33b-kube-api-access-pnqdk\") pod \"observability-ui-dashboards-7d5fb4cbfb-7mr5f\" (UID: \"e57a8f08-dbcd-439a-9d89-d11a3bf7f33b\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-7mr5f"
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.937264 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.939584 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.952110 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.952318 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.952912 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.953039 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.953810 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-knjnx"
Dec 12 16:10:23 crc kubenswrapper[4693]: I1212 16:10:23.960076 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.008260 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.054500 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-65dd6d4bcb-h8fs2"]
Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.055480 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.055520 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3fe5a970-1de1-4166-815b-81097dfe20ce-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.055562 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-config\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.055583 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jcx6\" (UniqueName: \"kubernetes.io/projected/3fe5a970-1de1-4166-815b-81097dfe20ce-kube-api-access-8jcx6\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.055666 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3fe5a970-1de1-4166-815b-81097dfe20ce-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.055686 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.055701 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.055742 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3fe5a970-1de1-4166-815b-81097dfe20ce-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.055842 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.081603 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65dd6d4bcb-h8fs2"] Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.157970 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3fe5a970-1de1-4166-815b-81097dfe20ce-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.158069 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-config\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.158108 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jcx6\" (UniqueName: \"kubernetes.io/projected/3fe5a970-1de1-4166-815b-81097dfe20ce-kube-api-access-8jcx6\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.158148 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac10c353-ed34-4f82-ad22-dc0065fbb96e-console-oauth-config\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.158174 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac10c353-ed34-4f82-ad22-dc0065fbb96e-console-config\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.158209 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac10c353-ed34-4f82-ad22-dc0065fbb96e-trusted-ca-bundle\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.158314 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac10c353-ed34-4f82-ad22-dc0065fbb96e-service-ca\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.158348 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3fe5a970-1de1-4166-815b-81097dfe20ce-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.158378 4693 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.158483 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.158597 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmfwn\" (UniqueName: \"kubernetes.io/projected/ac10c353-ed34-4f82-ad22-dc0065fbb96e-kube-api-access-xmfwn\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.158633 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac10c353-ed34-4f82-ad22-dc0065fbb96e-oauth-serving-cert\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.158693 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3fe5a970-1de1-4166-815b-81097dfe20ce-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.158785 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac10c353-ed34-4f82-ad22-dc0065fbb96e-console-serving-cert\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.158828 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.159140 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3fe5a970-1de1-4166-815b-81097dfe20ce-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.163717 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " 
pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.167410 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.167459 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/616b5aed7df61e2529192d0a714d9bffe231d3fa4b75c2ae3a6ad54f9059d388/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.167653 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-config\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.174008 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3fe5a970-1de1-4166-815b-81097dfe20ce-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.180457 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.180519 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3fe5a970-1de1-4166-815b-81097dfe20ce-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.185862 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jcx6\" (UniqueName: \"kubernetes.io/projected/3fe5a970-1de1-4166-815b-81097dfe20ce-kube-api-access-8jcx6\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.260590 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac10c353-ed34-4f82-ad22-dc0065fbb96e-service-ca\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.260696 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmfwn\" (UniqueName: \"kubernetes.io/projected/ac10c353-ed34-4f82-ad22-dc0065fbb96e-kube-api-access-xmfwn\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc 
kubenswrapper[4693]: I1212 16:10:24.260743 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac10c353-ed34-4f82-ad22-dc0065fbb96e-oauth-serving-cert\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.260817 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac10c353-ed34-4f82-ad22-dc0065fbb96e-console-serving-cert\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.261218 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac10c353-ed34-4f82-ad22-dc0065fbb96e-console-oauth-config\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.261250 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac10c353-ed34-4f82-ad22-dc0065fbb96e-console-config\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.261307 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac10c353-ed34-4f82-ad22-dc0065fbb96e-trusted-ca-bundle\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.262825 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\") pod \"prometheus-metric-storage-0\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.262884 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac10c353-ed34-4f82-ad22-dc0065fbb96e-trusted-ca-bundle\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.263825 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac10c353-ed34-4f82-ad22-dc0065fbb96e-service-ca\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.266978 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac10c353-ed34-4f82-ad22-dc0065fbb96e-console-serving-cert\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc 
kubenswrapper[4693]: I1212 16:10:24.268649 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac10c353-ed34-4f82-ad22-dc0065fbb96e-oauth-serving-cert\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.270872 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac10c353-ed34-4f82-ad22-dc0065fbb96e-console-config\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.273799 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac10c353-ed34-4f82-ad22-dc0065fbb96e-console-oauth-config\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.274461 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.299986 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmfwn\" (UniqueName: \"kubernetes.io/projected/ac10c353-ed34-4f82-ad22-dc0065fbb96e-kube-api-access-xmfwn\") pod \"console-65dd6d4bcb-h8fs2\" (UID: \"ac10c353-ed34-4f82-ad22-dc0065fbb96e\") " pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.364586 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e57a8f08-dbcd-439a-9d89-d11a3bf7f33b-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-7mr5f\" (UID: \"e57a8f08-dbcd-439a-9d89-d11a3bf7f33b\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-7mr5f" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.371629 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e57a8f08-dbcd-439a-9d89-d11a3bf7f33b-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-7mr5f\" (UID: \"e57a8f08-dbcd-439a-9d89-d11a3bf7f33b\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-7mr5f" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.387028 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:24 crc kubenswrapper[4693]: I1212 16:10:24.533284 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-7mr5f" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.133303 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-tzjth"] Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.135892 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.138114 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.138353 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.138730 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-xwgmx" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.165084 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tzjth"] Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.194057 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-296qr"] Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.233102 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.271105 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-296qr"] Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.303433 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/636fcc75-4f63-4bf9-bcfe-8d0720896f25-ovn-controller-tls-certs\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.304004 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/636fcc75-4f63-4bf9-bcfe-8d0720896f25-var-run-ovn\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.304144 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/636fcc75-4f63-4bf9-bcfe-8d0720896f25-scripts\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.304249 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/636fcc75-4f63-4bf9-bcfe-8d0720896f25-var-run\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.304389 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636fcc75-4f63-4bf9-bcfe-8d0720896f25-combined-ca-bundle\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.304528 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/636fcc75-4f63-4bf9-bcfe-8d0720896f25-var-log-ovn\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " 
pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.304564 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spmn7\" (UniqueName: \"kubernetes.io/projected/636fcc75-4f63-4bf9-bcfe-8d0720896f25-kube-api-access-spmn7\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.408434 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636fcc75-4f63-4bf9-bcfe-8d0720896f25-combined-ca-bundle\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.408512 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/636fcc75-4f63-4bf9-bcfe-8d0720896f25-var-log-ovn\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.408537 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spmn7\" (UniqueName: \"kubernetes.io/projected/636fcc75-4f63-4bf9-bcfe-8d0720896f25-kube-api-access-spmn7\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.409107 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/636fcc75-4f63-4bf9-bcfe-8d0720896f25-var-log-ovn\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.415028 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/636fcc75-4f63-4bf9-bcfe-8d0720896f25-ovn-controller-tls-certs\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.415134 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/af0d9145-66a5-493b-9528-debabd220fb0-etc-ovs\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.415200 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/af0d9145-66a5-493b-9528-debabd220fb0-var-lib\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.415248 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/636fcc75-4f63-4bf9-bcfe-8d0720896f25-var-run-ovn\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.415304 4693 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/af0d9145-66a5-493b-9528-debabd220fb0-var-run\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.415509 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/af0d9145-66a5-493b-9528-debabd220fb0-var-log\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.415551 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4dc6\" (UniqueName: \"kubernetes.io/projected/af0d9145-66a5-493b-9528-debabd220fb0-kube-api-access-z4dc6\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.416063 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/636fcc75-4f63-4bf9-bcfe-8d0720896f25-scripts\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.416112 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af0d9145-66a5-493b-9528-debabd220fb0-scripts\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.416648 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/636fcc75-4f63-4bf9-bcfe-8d0720896f25-var-run\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.417053 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/636fcc75-4f63-4bf9-bcfe-8d0720896f25-var-run\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.417195 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/636fcc75-4f63-4bf9-bcfe-8d0720896f25-var-run-ovn\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.419538 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636fcc75-4f63-4bf9-bcfe-8d0720896f25-combined-ca-bundle\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.421193 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/636fcc75-4f63-4bf9-bcfe-8d0720896f25-scripts\") pod 
\"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.424022 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/636fcc75-4f63-4bf9-bcfe-8d0720896f25-ovn-controller-tls-certs\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.433979 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spmn7\" (UniqueName: \"kubernetes.io/projected/636fcc75-4f63-4bf9-bcfe-8d0720896f25-kube-api-access-spmn7\") pod \"ovn-controller-tzjth\" (UID: \"636fcc75-4f63-4bf9-bcfe-8d0720896f25\") " pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.479153 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tzjth" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.519096 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/af0d9145-66a5-493b-9528-debabd220fb0-etc-ovs\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.519158 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/af0d9145-66a5-493b-9528-debabd220fb0-var-lib\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.519202 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/af0d9145-66a5-493b-9528-debabd220fb0-var-run\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.519236 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/af0d9145-66a5-493b-9528-debabd220fb0-var-log\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.519261 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4dc6\" (UniqueName: \"kubernetes.io/projected/af0d9145-66a5-493b-9528-debabd220fb0-kube-api-access-z4dc6\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.519309 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af0d9145-66a5-493b-9528-debabd220fb0-scripts\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.523109 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/af0d9145-66a5-493b-9528-debabd220fb0-var-run\") pod \"ovn-controller-ovs-296qr\" (UID: 
\"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.523300 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/af0d9145-66a5-493b-9528-debabd220fb0-etc-ovs\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.523411 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/af0d9145-66a5-493b-9528-debabd220fb0-var-lib\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.523464 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/af0d9145-66a5-493b-9528-debabd220fb0-var-log\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.529667 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af0d9145-66a5-493b-9528-debabd220fb0-scripts\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.563011 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4dc6\" (UniqueName: \"kubernetes.io/projected/af0d9145-66a5-493b-9528-debabd220fb0-kube-api-access-z4dc6\") pod \"ovn-controller-ovs-296qr\" (UID: \"af0d9145-66a5-493b-9528-debabd220fb0\") " pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:25 crc kubenswrapper[4693]: I1212 16:10:25.852249 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.015169 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.017531 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.021988 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.022220 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-k66hh" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.022404 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.022560 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.022718 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.031004 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.133547 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35312611-07ec-4f42-b175-e31630485c05-config\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.133640 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35312611-07ec-4f42-b175-e31630485c05-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.133733 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/35312611-07ec-4f42-b175-e31630485c05-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.133769 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35312611-07ec-4f42-b175-e31630485c05-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.133805 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fa46ac57-a173-4557-ab03-88f714b8c558\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa46ac57-a173-4557-ab03-88f714b8c558\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.133886 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/35312611-07ec-4f42-b175-e31630485c05-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.133921 4693 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crpw6\" (UniqueName: \"kubernetes.io/projected/35312611-07ec-4f42-b175-e31630485c05-kube-api-access-crpw6\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.133949 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/35312611-07ec-4f42-b175-e31630485c05-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.235500 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fa46ac57-a173-4557-ab03-88f714b8c558\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa46ac57-a173-4557-ab03-88f714b8c558\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.235593 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/35312611-07ec-4f42-b175-e31630485c05-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.235618 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crpw6\" (UniqueName: \"kubernetes.io/projected/35312611-07ec-4f42-b175-e31630485c05-kube-api-access-crpw6\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.235643 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/35312611-07ec-4f42-b175-e31630485c05-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.235700 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35312611-07ec-4f42-b175-e31630485c05-config\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.235740 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35312611-07ec-4f42-b175-e31630485c05-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.235811 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/35312611-07ec-4f42-b175-e31630485c05-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.235836 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35312611-07ec-4f42-b175-e31630485c05-combined-ca-bundle\") 
pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.237638 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/35312611-07ec-4f42-b175-e31630485c05-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.237656 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35312611-07ec-4f42-b175-e31630485c05-config\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.239537 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.239562 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fa46ac57-a173-4557-ab03-88f714b8c558\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa46ac57-a173-4557-ab03-88f714b8c558\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e730b0bf617829bff9fe610bb94f0cbf8cbaeff335ed3246403ca1c01073ff60/globalmount\"" pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.240121 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35312611-07ec-4f42-b175-e31630485c05-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.251515 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/35312611-07ec-4f42-b175-e31630485c05-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.255393 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35312611-07ec-4f42-b175-e31630485c05-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.255899 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crpw6\" (UniqueName: \"kubernetes.io/projected/35312611-07ec-4f42-b175-e31630485c05-kube-api-access-crpw6\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.261231 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/35312611-07ec-4f42-b175-e31630485c05-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.296461 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-fa46ac57-a173-4557-ab03-88f714b8c558\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fa46ac57-a173-4557-ab03-88f714b8c558\") pod \"ovsdbserver-nb-0\" (UID: \"35312611-07ec-4f42-b175-e31630485c05\") " pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:26 crc kubenswrapper[4693]: I1212 16:10:26.407780 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 12 16:10:28 crc kubenswrapper[4693]: I1212 16:10:28.944582 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 12 16:10:28 crc kubenswrapper[4693]: I1212 16:10:28.946418 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:28 crc kubenswrapper[4693]: I1212 16:10:28.949425 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-8fc6m" Dec 12 16:10:28 crc kubenswrapper[4693]: I1212 16:10:28.949559 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 12 16:10:28 crc kubenswrapper[4693]: I1212 16:10:28.949685 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 12 16:10:28 crc kubenswrapper[4693]: I1212 16:10:28.949977 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 12 16:10:28 crc kubenswrapper[4693]: I1212 16:10:28.959045 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.112065 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/745f2c7a-de43-450e-a05b-0dcc7d3b834a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.112145 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/745f2c7a-de43-450e-a05b-0dcc7d3b834a-config\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.112169 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/745f2c7a-de43-450e-a05b-0dcc7d3b834a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.112205 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/745f2c7a-de43-450e-a05b-0dcc7d3b834a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.112221 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t6c5\" (UniqueName: \"kubernetes.io/projected/745f2c7a-de43-450e-a05b-0dcc7d3b834a-kube-api-access-6t6c5\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc 
kubenswrapper[4693]: I1212 16:10:29.112261 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/745f2c7a-de43-450e-a05b-0dcc7d3b834a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.112325 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9f6bd4df-86aa-4d28-a18d-15d2d6b5fe64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f6bd4df-86aa-4d28-a18d-15d2d6b5fe64\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.112363 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/745f2c7a-de43-450e-a05b-0dcc7d3b834a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.214764 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/745f2c7a-de43-450e-a05b-0dcc7d3b834a-config\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.215077 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/745f2c7a-de43-450e-a05b-0dcc7d3b834a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.215179 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/745f2c7a-de43-450e-a05b-0dcc7d3b834a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.215316 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t6c5\" (UniqueName: \"kubernetes.io/projected/745f2c7a-de43-450e-a05b-0dcc7d3b834a-kube-api-access-6t6c5\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.215538 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/745f2c7a-de43-450e-a05b-0dcc7d3b834a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.215628 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/745f2c7a-de43-450e-a05b-0dcc7d3b834a-config\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.215947 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/745f2c7a-de43-450e-a05b-0dcc7d3b834a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.216680 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9f6bd4df-86aa-4d28-a18d-15d2d6b5fe64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f6bd4df-86aa-4d28-a18d-15d2d6b5fe64\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.217099 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/745f2c7a-de43-450e-a05b-0dcc7d3b834a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.217302 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/745f2c7a-de43-450e-a05b-0dcc7d3b834a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.217264 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/745f2c7a-de43-450e-a05b-0dcc7d3b834a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.220519 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.220559 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9f6bd4df-86aa-4d28-a18d-15d2d6b5fe64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f6bd4df-86aa-4d28-a18d-15d2d6b5fe64\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4891d557f23b801381ab0569e2c654f82c461688013c1d07bd076c1ec114641a/globalmount\"" pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.222027 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/745f2c7a-de43-450e-a05b-0dcc7d3b834a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.222328 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/745f2c7a-de43-450e-a05b-0dcc7d3b834a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.226425 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/745f2c7a-de43-450e-a05b-0dcc7d3b834a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.235021 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t6c5\" (UniqueName: \"kubernetes.io/projected/745f2c7a-de43-450e-a05b-0dcc7d3b834a-kube-api-access-6t6c5\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.259195 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9f6bd4df-86aa-4d28-a18d-15d2d6b5fe64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f6bd4df-86aa-4d28-a18d-15d2d6b5fe64\") pod \"ovsdbserver-sb-0\" (UID: \"745f2c7a-de43-450e-a05b-0dcc7d3b834a\") " pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:29 crc kubenswrapper[4693]: I1212 16:10:29.276205 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 12 16:10:37 crc kubenswrapper[4693]: E1212 16:10:37.810576 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 12 16:10:37 crc kubenswrapper[4693]: E1212 16:10:37.811158 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fq6zv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-46hld_openstack(c14f1710-5bb5-4333-b584-d5bff01ec285): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:10:37 crc kubenswrapper[4693]: E1212 16:10:37.812558 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-46hld" podUID="c14f1710-5bb5-4333-b584-d5bff01ec285" Dec 12 16:10:39 crc kubenswrapper[4693]: E1212 16:10:39.401052 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 12 16:10:39 crc kubenswrapper[4693]: E1212 16:10:39.401493 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xws2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-4l2q9_openstack(917b9605-32a3-4e61-9127-aff641344aa3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:10:39 crc kubenswrapper[4693]: E1212 16:10:39.402695 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9" podUID="917b9605-32a3-4e61-9127-aff641344aa3" Dec 12 16:10:39 crc kubenswrapper[4693]: E1212 16:10:39.910147 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9" podUID="917b9605-32a3-4e61-9127-aff641344aa3" Dec 12 16:10:39 crc kubenswrapper[4693]: E1212 16:10:39.992798 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 12 16:10:39 crc kubenswrapper[4693]: E1212 16:10:39.993198 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-74bcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-xmjz8_openstack(53107282-9d7c-4944-9da2-8efd5ef1500d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:10:39 crc kubenswrapper[4693]: E1212 16:10:39.994463 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-xmjz8" podUID="53107282-9d7c-4944-9da2-8efd5ef1500d" Dec 12 16:10:40 crc kubenswrapper[4693]: I1212 16:10:40.459465 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-46hld" Dec 12 16:10:40 crc kubenswrapper[4693]: I1212 16:10:40.562796 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c14f1710-5bb5-4333-b584-d5bff01ec285-dns-svc\") pod \"c14f1710-5bb5-4333-b584-d5bff01ec285\" (UID: \"c14f1710-5bb5-4333-b584-d5bff01ec285\") " Dec 12 16:10:40 crc kubenswrapper[4693]: I1212 16:10:40.563263 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq6zv\" (UniqueName: \"kubernetes.io/projected/c14f1710-5bb5-4333-b584-d5bff01ec285-kube-api-access-fq6zv\") pod \"c14f1710-5bb5-4333-b584-d5bff01ec285\" (UID: \"c14f1710-5bb5-4333-b584-d5bff01ec285\") " Dec 12 16:10:40 crc kubenswrapper[4693]: I1212 16:10:40.563407 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14f1710-5bb5-4333-b584-d5bff01ec285-config\") pod \"c14f1710-5bb5-4333-b584-d5bff01ec285\" (UID: \"c14f1710-5bb5-4333-b584-d5bff01ec285\") " Dec 12 16:10:40 crc kubenswrapper[4693]: I1212 16:10:40.565669 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c14f1710-5bb5-4333-b584-d5bff01ec285-config" (OuterVolumeSpecName: "config") pod "c14f1710-5bb5-4333-b584-d5bff01ec285" (UID: "c14f1710-5bb5-4333-b584-d5bff01ec285"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:10:40 crc kubenswrapper[4693]: I1212 16:10:40.566513 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c14f1710-5bb5-4333-b584-d5bff01ec285-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c14f1710-5bb5-4333-b584-d5bff01ec285" (UID: "c14f1710-5bb5-4333-b584-d5bff01ec285"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:10:40 crc kubenswrapper[4693]: I1212 16:10:40.573534 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c14f1710-5bb5-4333-b584-d5bff01ec285-kube-api-access-fq6zv" (OuterVolumeSpecName: "kube-api-access-fq6zv") pod "c14f1710-5bb5-4333-b584-d5bff01ec285" (UID: "c14f1710-5bb5-4333-b584-d5bff01ec285"). InnerVolumeSpecName "kube-api-access-fq6zv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:10:40 crc kubenswrapper[4693]: I1212 16:10:40.636666 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 12 16:10:40 crc kubenswrapper[4693]: I1212 16:10:40.666407 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14f1710-5bb5-4333-b584-d5bff01ec285-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:10:40 crc kubenswrapper[4693]: I1212 16:10:40.666448 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c14f1710-5bb5-4333-b584-d5bff01ec285-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 16:10:40 crc kubenswrapper[4693]: I1212 16:10:40.666467 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq6zv\" (UniqueName: \"kubernetes.io/projected/c14f1710-5bb5-4333-b584-d5bff01ec285-kube-api-access-fq6zv\") on node \"crc\" DevicePath \"\"" Dec 12 16:10:40 crc kubenswrapper[4693]: W1212 16:10:40.689719 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fe5a970_1de1_4166_815b_81097dfe20ce.slice/crio-35b30b29b7da8a025aa9088941729352dc4ae3db7b6add81767fce39f440ef00 WatchSource:0}: Error finding container 35b30b29b7da8a025aa9088941729352dc4ae3db7b6add81767fce39f440ef00: Status 404 returned error can't find the container with id 35b30b29b7da8a025aa9088941729352dc4ae3db7b6add81767fce39f440ef00 Dec 12 16:10:41 crc kubenswrapper[4693]: I1212 16:10:40.918302 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3fe5a970-1de1-4166-815b-81097dfe20ce","Type":"ContainerStarted","Data":"35b30b29b7da8a025aa9088941729352dc4ae3db7b6add81767fce39f440ef00"} Dec 12 16:10:41 crc kubenswrapper[4693]: I1212 16:10:40.920534 4693 generic.go:334] "Generic (PLEG): container finished" podID="b5827a44-9073-412a-90ec-653b5ac3f5fd" containerID="91721f41cfc8320be57837e7006b01189b808edb4067dfabe9a9afe731c3233b" exitCode=0 Dec 12 16:10:41 crc kubenswrapper[4693]: I1212 16:10:40.920605 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6cwfg" event={"ID":"b5827a44-9073-412a-90ec-653b5ac3f5fd","Type":"ContainerDied","Data":"91721f41cfc8320be57837e7006b01189b808edb4067dfabe9a9afe731c3233b"} Dec 12 16:10:41 crc kubenswrapper[4693]: I1212 16:10:40.921899 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-46hld" event={"ID":"c14f1710-5bb5-4333-b584-d5bff01ec285","Type":"ContainerDied","Data":"e86a4cbec584236b9452f75be7af39bac6e10e093513f40103a0b8db9eb95fb2"} Dec 12 16:10:41 crc kubenswrapper[4693]: I1212 16:10:40.921983 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-46hld" Dec 12 16:10:41 crc kubenswrapper[4693]: I1212 16:10:41.049973 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-7mr5f"] Dec 12 16:10:41 crc kubenswrapper[4693]: I1212 16:10:41.062853 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65dd6d4bcb-h8fs2"] Dec 12 16:10:41 crc kubenswrapper[4693]: I1212 16:10:41.071655 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tzjth"] Dec 12 16:10:41 crc kubenswrapper[4693]: I1212 16:10:41.080026 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 12 16:10:41 crc kubenswrapper[4693]: W1212 16:10:41.097085 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode57a8f08_dbcd_439a_9d89_d11a3bf7f33b.slice/crio-8b3c31948006a372c6721e73537a009051aa720c9b56c08f5257c09081d6b134 WatchSource:0}: Error finding container 8b3c31948006a372c6721e73537a009051aa720c9b56c08f5257c09081d6b134: Status 404 returned error can't find the container with id 8b3c31948006a372c6721e73537a009051aa720c9b56c08f5257c09081d6b134 Dec 12 16:10:41 crc kubenswrapper[4693]: W1212 16:10:41.104126 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac10c353_ed34_4f82_ad22_dc0065fbb96e.slice/crio-cde9e77da3295600be33cd58cb7c2c63e9214827fbc2a6c517bdbba77fae0394 WatchSource:0}: Error finding container cde9e77da3295600be33cd58cb7c2c63e9214827fbc2a6c517bdbba77fae0394: Status 404 returned error can't find the container with id cde9e77da3295600be33cd58cb7c2c63e9214827fbc2a6c517bdbba77fae0394 Dec 12 16:10:41 crc kubenswrapper[4693]: W1212 16:10:41.213759 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod636fcc75_4f63_4bf9_bcfe_8d0720896f25.slice/crio-899a53ad3c0e1688235c2cc9ea067275699a26ae711e79f173e98dcb6a7723d0 WatchSource:0}: Error finding container 899a53ad3c0e1688235c2cc9ea067275699a26ae711e79f173e98dcb6a7723d0: Status 404 returned error can't find the container with id 899a53ad3c0e1688235c2cc9ea067275699a26ae711e79f173e98dcb6a7723d0 Dec 12 16:10:41 crc kubenswrapper[4693]: W1212 16:10:41.297496 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d8c08a3_7925_40d9_afbc_76755b6c0263.slice/crio-bd1eea82fd61550040eda29a85c53aea6ed1518251559670f30e067dce797ef2 WatchSource:0}: Error finding container bd1eea82fd61550040eda29a85c53aea6ed1518251559670f30e067dce797ef2: Status 404 returned error can't find the container with id bd1eea82fd61550040eda29a85c53aea6ed1518251559670f30e067dce797ef2 Dec 12 16:10:41 crc kubenswrapper[4693]: I1212 16:10:41.606009 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-296qr"] Dec 12 16:10:41 crc kubenswrapper[4693]: W1212 16:10:41.696487 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf0d9145_66a5_493b_9528_debabd220fb0.slice/crio-744da9dc35462f5aedf00f27ac09531cfe7947998e7ddcac6008e6b42f1799df WatchSource:0}: Error finding container 744da9dc35462f5aedf00f27ac09531cfe7947998e7ddcac6008e6b42f1799df: Status 404 returned error can't find the container with id 
744da9dc35462f5aedf00f27ac09531cfe7947998e7ddcac6008e6b42f1799df Dec 12 16:10:41 crc kubenswrapper[4693]: I1212 16:10:41.967495 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dd6d4bcb-h8fs2" event={"ID":"ac10c353-ed34-4f82-ad22-dc0065fbb96e","Type":"ContainerStarted","Data":"cde9e77da3295600be33cd58cb7c2c63e9214827fbc2a6c517bdbba77fae0394"} Dec 12 16:10:41 crc kubenswrapper[4693]: I1212 16:10:41.982685 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ec693a73-a415-42a1-98f4-86438aa58d56","Type":"ContainerStarted","Data":"d6989e0783abb9e32a4631136ca6766f2ae5c109472e6bbbe73e86de12a85aa1"} Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.001605 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8f5c0331-ca61-42ce-989a-499b5eb81bc9","Type":"ContainerStarted","Data":"bd355a81ddcd6184854dc52e9d9dd60af02daa782efef3a056bed33a6851a15b"} Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.002613 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.004963 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-7mr5f" event={"ID":"e57a8f08-dbcd-439a-9d89-d11a3bf7f33b","Type":"ContainerStarted","Data":"8b3c31948006a372c6721e73537a009051aa720c9b56c08f5257c09081d6b134"} Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.006156 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-296qr" event={"ID":"af0d9145-66a5-493b-9528-debabd220fb0","Type":"ContainerStarted","Data":"744da9dc35462f5aedf00f27ac09531cfe7947998e7ddcac6008e6b42f1799df"} Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.008549 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tzjth" event={"ID":"636fcc75-4f63-4bf9-bcfe-8d0720896f25","Type":"ContainerStarted","Data":"899a53ad3c0e1688235c2cc9ea067275699a26ae711e79f173e98dcb6a7723d0"} Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.010544 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"73559c8b-d017-4a5d-aced-3da25d264b0a","Type":"ContainerStarted","Data":"d6100f4486653c75e81cd31bddcf5411d1a68e6e09b0e0ba1d90c9c98831cca8"} Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.011593 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0d8c08a3-7925-40d9-afbc-76755b6c0263","Type":"ContainerStarted","Data":"bd1eea82fd61550040eda29a85c53aea6ed1518251559670f30e067dce797ef2"} Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.045772 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.073201 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.095140472 podStartE2EDuration="23.073176953s" podCreationTimestamp="2025-12-12 16:10:19 +0000 UTC" firstStartedPulling="2025-12-12 16:10:21.144104469 +0000 UTC m=+1448.312744070" lastFinishedPulling="2025-12-12 16:10:40.12214095 +0000 UTC m=+1467.290780551" observedRunningTime="2025-12-12 16:10:42.049016333 +0000 UTC m=+1469.217655934" watchObservedRunningTime="2025-12-12 16:10:42.073176953 +0000 UTC m=+1469.241816564" Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.412009 4693 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xmjz8" Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.523620 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74bcl\" (UniqueName: \"kubernetes.io/projected/53107282-9d7c-4944-9da2-8efd5ef1500d-kube-api-access-74bcl\") pod \"53107282-9d7c-4944-9da2-8efd5ef1500d\" (UID: \"53107282-9d7c-4944-9da2-8efd5ef1500d\") " Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.523828 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53107282-9d7c-4944-9da2-8efd5ef1500d-config\") pod \"53107282-9d7c-4944-9da2-8efd5ef1500d\" (UID: \"53107282-9d7c-4944-9da2-8efd5ef1500d\") " Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.524380 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53107282-9d7c-4944-9da2-8efd5ef1500d-config" (OuterVolumeSpecName: "config") pod "53107282-9d7c-4944-9da2-8efd5ef1500d" (UID: "53107282-9d7c-4944-9da2-8efd5ef1500d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.531641 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53107282-9d7c-4944-9da2-8efd5ef1500d-kube-api-access-74bcl" (OuterVolumeSpecName: "kube-api-access-74bcl") pod "53107282-9d7c-4944-9da2-8efd5ef1500d" (UID: "53107282-9d7c-4944-9da2-8efd5ef1500d"). InnerVolumeSpecName "kube-api-access-74bcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:10:42 crc kubenswrapper[4693]: E1212 16:10:42.615641 4693 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Dec 12 16:10:42 crc kubenswrapper[4693]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/b5827a44-9073-412a-90ec-653b5ac3f5fd/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 12 16:10:42 crc kubenswrapper[4693]: > podSandboxID="e2024a156dca85df3754725ff427c2d1e4ec7f6c85d942da702ed7961c0f1438" Dec 12 16:10:42 crc kubenswrapper[4693]: E1212 16:10:42.615777 4693 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 12 16:10:42 crc kubenswrapper[4693]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ps9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-6cwfg_openstack(b5827a44-9073-412a-90ec-653b5ac3f5fd): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/b5827a44-9073-412a-90ec-653b5ac3f5fd/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 12 16:10:42 crc kubenswrapper[4693]: > logger="UnhandledError" Dec 12 16:10:42 crc kubenswrapper[4693]: E1212 16:10:42.617303 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/b5827a44-9073-412a-90ec-653b5ac3f5fd/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-6cwfg" podUID="b5827a44-9073-412a-90ec-653b5ac3f5fd" Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.626314 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53107282-9d7c-4944-9da2-8efd5ef1500d-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.626361 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74bcl\" (UniqueName: \"kubernetes.io/projected/53107282-9d7c-4944-9da2-8efd5ef1500d-kube-api-access-74bcl\") on node \"crc\" DevicePath \"\""
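
The CreateContainerError above is a subPath failure. For the dns-svc mount (MountPath /etc/dnsmasq.d/hosts/dns-svc, SubPath dns-svc in the spec dump), kubelet first prepares a bind-mount source under /var/lib/kubelet/pods/<pod-uid>/volume-subpaths/<volume>/<container>/<mount-index>, and the runtime then failed because that prepared source was missing at container-create time; one plausible cause, not confirmed by this log, is the container being recreated while the backing ConfigMap projection was being replaced. A sketch of the paths involved, with the mount fields copied from the spec dump and the index taken from the error message:

```go
// Reconstruct the subPath bind-mount pair named in the CreateContainerError.
package main

import (
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The failing mount, as declared in the container spec above.
	m := corev1.VolumeMount{
		Name:      "dns-svc",
		ReadOnly:  true,
		MountPath: "/etc/dnsmasq.d/hosts/dns-svc",
		SubPath:   "dns-svc",
	}

	// Where kubelet stages the bind-mount source for subPath mounts;
	// pod UID, container name, and mount index 1 are from the error above.
	podUID, container, mountIndex := "b5827a44-9073-412a-90ec-653b5ac3f5fd", "dnsmasq-dns", "1"
	src := filepath.Join("/var/lib/kubelet/pods", podUID,
		"volume-subpaths", m.Name, container, mountIndex)

	// This is exactly the source -> target pair the runtime failed to mount.
	fmt.Println(src, "->", m.MountPath)
}
```
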
\"crc\" DevicePath \"\"" Dec 12 16:10:42 crc kubenswrapper[4693]: I1212 16:10:42.707596 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 12 16:10:43 crc kubenswrapper[4693]: I1212 16:10:43.022629 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"745f2c7a-de43-450e-a05b-0dcc7d3b834a","Type":"ContainerStarted","Data":"cee2dfb0a94fd82ba065dae89a011af3b22883f6e4d5feb942a66c62448c299a"} Dec 12 16:10:43 crc kubenswrapper[4693]: I1212 16:10:43.023971 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xmjz8" event={"ID":"53107282-9d7c-4944-9da2-8efd5ef1500d","Type":"ContainerDied","Data":"cedfd21b4681fb61164b77df88deb522e975c0125c62d2d34c9eec45de290c75"} Dec 12 16:10:43 crc kubenswrapper[4693]: I1212 16:10:43.024101 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xmjz8" Dec 12 16:10:43 crc kubenswrapper[4693]: I1212 16:10:43.030869 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"d45363e2-3684-4fc6-b322-d99e6e87d3fd","Type":"ContainerStarted","Data":"d9cff3e0e49928af617fc29c4e6db7c48889e3152f393792640f1c5d6d777637"} Dec 12 16:10:43 crc kubenswrapper[4693]: I1212 16:10:43.036117 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"62a37a53-6f53-4b51-b493-edfdb42c3a93","Type":"ContainerStarted","Data":"22692ec927135356ca36a5a4cb2f6741bb3735d358eacd1c581a9edc0bb9e830"} Dec 12 16:10:43 crc kubenswrapper[4693]: I1212 16:10:43.038246 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db","Type":"ContainerStarted","Data":"6263a31351a8b66e00866ea115ef46512c88016a614fa3e3c0c54fd7cd8bc30a"} Dec 12 16:10:43 crc kubenswrapper[4693]: I1212 16:10:43.039747 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dd6d4bcb-h8fs2" event={"ID":"ac10c353-ed34-4f82-ad22-dc0065fbb96e","Type":"ContainerStarted","Data":"df2c0ba9a10de9c6284da4af4cfdf8984f720e1a8f7248d0acad338c9985812e"} Dec 12 16:10:43 crc kubenswrapper[4693]: I1212 16:10:43.043517 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6fd6556d-68c5-4492-804c-bc3188ab39b7","Type":"ContainerStarted","Data":"3be89743cd21d75b5646c0e66954be38785af6de6dae899ed1446b1366deecb8"} Dec 12 16:10:43 crc kubenswrapper[4693]: I1212 16:10:43.048467 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"35312611-07ec-4f42-b175-e31630485c05","Type":"ContainerStarted","Data":"5eff87c3ec94c89b7aaf2451c4aca2d48d5ee6b0d3f98637d61e58479f9099c0"} Dec 12 16:10:43 crc kubenswrapper[4693]: I1212 16:10:43.158342 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xmjz8"] Dec 12 16:10:43 crc kubenswrapper[4693]: I1212 16:10:43.174016 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xmjz8"] Dec 12 16:10:43 crc kubenswrapper[4693]: I1212 16:10:43.236264 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-65dd6d4bcb-h8fs2" podStartSLOduration=20.23623672 podStartE2EDuration="20.23623672s" podCreationTimestamp="2025-12-12 16:10:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-12-12 16:10:43.231931484 +0000 UTC m=+1470.400571095" watchObservedRunningTime="2025-12-12 16:10:43.23623672 +0000 UTC m=+1470.404876321" Dec 12 16:10:43 crc kubenswrapper[4693]: I1212 16:10:43.375412 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53107282-9d7c-4944-9da2-8efd5ef1500d" path="/var/lib/kubelet/pods/53107282-9d7c-4944-9da2-8efd5ef1500d/volumes" Dec 12 16:10:44 crc kubenswrapper[4693]: I1212 16:10:44.387749 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:44 crc kubenswrapper[4693]: I1212 16:10:44.388105 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:44 crc kubenswrapper[4693]: I1212 16:10:44.393205 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:45 crc kubenswrapper[4693]: I1212 16:10:45.074145 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 16:10:45 crc kubenswrapper[4693]: I1212 16:10:45.152201 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d7d4f8db7-5gfv7"] Dec 12 16:10:46 crc kubenswrapper[4693]: I1212 16:10:46.082356 4693 generic.go:334] "Generic (PLEG): container finished" podID="73559c8b-d017-4a5d-aced-3da25d264b0a" containerID="d6100f4486653c75e81cd31bddcf5411d1a68e6e09b0e0ba1d90c9c98831cca8" exitCode=0 Dec 12 16:10:46 crc kubenswrapper[4693]: I1212 16:10:46.082420 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"73559c8b-d017-4a5d-aced-3da25d264b0a","Type":"ContainerDied","Data":"d6100f4486653c75e81cd31bddcf5411d1a68e6e09b0e0ba1d90c9c98831cca8"} Dec 12 16:10:46 crc kubenswrapper[4693]: I1212 16:10:46.086239 4693 generic.go:334] "Generic (PLEG): container finished" podID="ec693a73-a415-42a1-98f4-86438aa58d56" containerID="d6989e0783abb9e32a4631136ca6766f2ae5c109472e6bbbe73e86de12a85aa1" exitCode=0 Dec 12 16:10:46 crc kubenswrapper[4693]: I1212 16:10:46.087358 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ec693a73-a415-42a1-98f4-86438aa58d56","Type":"ContainerDied","Data":"d6989e0783abb9e32a4631136ca6766f2ae5c109472e6bbbe73e86de12a85aa1"} Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.136200 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-296qr" event={"ID":"af0d9145-66a5-493b-9528-debabd220fb0","Type":"ContainerStarted","Data":"b7b3ed9cf80b2135e766f979f88ec588c81f46dccf3948175643860d028f5e96"} Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.143413 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tzjth" event={"ID":"636fcc75-4f63-4bf9-bcfe-8d0720896f25","Type":"ContainerStarted","Data":"985da017c36f94e434d3e9e29d414db448bc5fab142cd65282b4e4bd60efe356"} Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.143576 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-tzjth" Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.147329 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"73559c8b-d017-4a5d-aced-3da25d264b0a","Type":"ContainerStarted","Data":"ecf679915acb4c8bd632299eb618fc9ab601608b029a46cdfa94cef79871b750"} Dec 12 
Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.150193 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ec693a73-a415-42a1-98f4-86438aa58d56","Type":"ContainerStarted","Data":"ee2909168c683d47031dfb2427036a864486334bf9662c1d0420c4430180999b"} Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.153505 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"35312611-07ec-4f42-b175-e31630485c05","Type":"ContainerStarted","Data":"86c376800574692693223dafcd34b19ebe3089c5bfc0fc2c9407dd656cd1f77d"} Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.157341 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6cwfg" event={"ID":"b5827a44-9073-412a-90ec-653b5ac3f5fd","Type":"ContainerStarted","Data":"fecdd48ca99acb8b9346023cbbbfcb64889582cae9090ca80a85da8870bf3172"} Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.158669 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-6cwfg" Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.161941 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-7mr5f" event={"ID":"e57a8f08-dbcd-439a-9d89-d11a3bf7f33b","Type":"ContainerStarted","Data":"e765e6b3168131a8541adf558e5dfed77f7d4ae8509ef68a1a201106d40b85c7"} Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.170166 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"745f2c7a-de43-450e-a05b-0dcc7d3b834a","Type":"ContainerStarted","Data":"a3bdd8c21b2611cb662c275419fa589421c2cae7e664863cf78a33c93e1ef16f"} Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.186815 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=13.079293064 podStartE2EDuration="32.186797428s" podCreationTimestamp="2025-12-12 16:10:18 +0000 UTC" firstStartedPulling="2025-12-12 16:10:20.983095148 +0000 UTC m=+1448.151734749" lastFinishedPulling="2025-12-12 16:10:40.090599512 +0000 UTC m=+1467.259239113" observedRunningTime="2025-12-12 16:10:50.185587786 +0000 UTC m=+1477.354227387" watchObservedRunningTime="2025-12-12 16:10:50.186797428 +0000 UTC m=+1477.355437029" Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.226380 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-tzjth" podStartSLOduration=20.030280758 podStartE2EDuration="25.226358572s" podCreationTimestamp="2025-12-12 16:10:25 +0000 UTC" firstStartedPulling="2025-12-12 16:10:41.219645033 +0000 UTC m=+1468.388284634" lastFinishedPulling="2025-12-12 16:10:46.415722847 +0000 UTC m=+1473.584362448" observedRunningTime="2025-12-12 16:10:50.221790909 +0000 UTC m=+1477.390430530" watchObservedRunningTime="2025-12-12 16:10:50.226358572 +0000 UTC m=+1477.394998173" Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.233748 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-6cwfg" podStartSLOduration=11.810982708000001 podStartE2EDuration="35.233725111s" podCreationTimestamp="2025-12-12 16:10:15 +0000 UTC" firstStartedPulling="2025-12-12 16:10:16.706401526 +0000 UTC m=+1443.875041127" lastFinishedPulling="2025-12-12 16:10:40.129143929 +0000 UTC m=+1467.297783530" observedRunningTime="2025-12-12 16:10:50.204462113 +0000 UTC m=+1477.373101724"
watchObservedRunningTime="2025-12-12 16:10:50.233725111 +0000 UTC m=+1477.402364712" Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.247747 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-7mr5f" podStartSLOduration=22.208995316 podStartE2EDuration="27.247723907s" podCreationTimestamp="2025-12-12 16:10:23 +0000 UTC" firstStartedPulling="2025-12-12 16:10:41.102989395 +0000 UTC m=+1468.271628996" lastFinishedPulling="2025-12-12 16:10:46.141717986 +0000 UTC m=+1473.310357587" observedRunningTime="2025-12-12 16:10:50.237591685 +0000 UTC m=+1477.406231286" watchObservedRunningTime="2025-12-12 16:10:50.247723907 +0000 UTC m=+1477.416363508" Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.268095 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=12.595417808 podStartE2EDuration="33.268073954s" podCreationTimestamp="2025-12-12 16:10:17 +0000 UTC" firstStartedPulling="2025-12-12 16:10:19.484987389 +0000 UTC m=+1446.653626990" lastFinishedPulling="2025-12-12 16:10:40.157643535 +0000 UTC m=+1467.326283136" observedRunningTime="2025-12-12 16:10:50.253507813 +0000 UTC m=+1477.422147414" watchObservedRunningTime="2025-12-12 16:10:50.268073954 +0000 UTC m=+1477.436713555" Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.428732 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.428795 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 12 16:10:50 crc kubenswrapper[4693]: I1212 16:10:50.451632 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 12 16:10:51 crc kubenswrapper[4693]: I1212 16:10:51.182567 4693 generic.go:334] "Generic (PLEG): container finished" podID="af0d9145-66a5-493b-9528-debabd220fb0" containerID="b7b3ed9cf80b2135e766f979f88ec588c81f46dccf3948175643860d028f5e96" exitCode=0 Dec 12 16:10:51 crc kubenswrapper[4693]: I1212 16:10:51.182936 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-296qr" event={"ID":"af0d9145-66a5-493b-9528-debabd220fb0","Type":"ContainerDied","Data":"b7b3ed9cf80b2135e766f979f88ec588c81f46dccf3948175643860d028f5e96"} Dec 12 16:10:51 crc kubenswrapper[4693]: I1212 16:10:51.194379 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0d8c08a3-7925-40d9-afbc-76755b6c0263","Type":"ContainerStarted","Data":"b162c33663d6a2f0b2616e8c1086ae14834c01bc6b2ccc7ecf16122053555837"} Dec 12 16:10:51 crc kubenswrapper[4693]: I1212 16:10:51.238342 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=20.97902211 podStartE2EDuration="29.238305933s" podCreationTimestamp="2025-12-12 16:10:22 +0000 UTC" firstStartedPulling="2025-12-12 16:10:41.313458167 +0000 UTC m=+1468.482097778" lastFinishedPulling="2025-12-12 16:10:49.572742 +0000 UTC m=+1476.741381601" observedRunningTime="2025-12-12 16:10:51.230114862 +0000 UTC m=+1478.398754463" watchObservedRunningTime="2025-12-12 16:10:51.238305933 +0000 UTC m=+1478.406945534" Dec 12 16:10:52 crc kubenswrapper[4693]: I1212 16:10:52.206914 4693 generic.go:334] "Generic (PLEG): container finished" podID="917b9605-32a3-4e61-9127-aff641344aa3" 
containerID="4ba170d1942dd23d3e8157805a856cdcd0928956df78dbc731eca145b83ddd84" exitCode=0 Dec 12 16:10:52 crc kubenswrapper[4693]: I1212 16:10:52.207630 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9" event={"ID":"917b9605-32a3-4e61-9127-aff641344aa3","Type":"ContainerDied","Data":"4ba170d1942dd23d3e8157805a856cdcd0928956df78dbc731eca145b83ddd84"} Dec 12 16:10:52 crc kubenswrapper[4693]: I1212 16:10:52.212754 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-296qr" event={"ID":"af0d9145-66a5-493b-9528-debabd220fb0","Type":"ContainerStarted","Data":"948423e5efa2537b45b1f8eb65b19e061085a0f57b14f1b21e97fd27c3362ce3"} Dec 12 16:10:52 crc kubenswrapper[4693]: I1212 16:10:52.212911 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 12 16:10:52 crc kubenswrapper[4693]: I1212 16:10:52.987922 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4l2q9"] Dec 12 16:10:53 crc kubenswrapper[4693]: I1212 16:10:53.054543 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-2thk6"] Dec 12 16:10:53 crc kubenswrapper[4693]: I1212 16:10:53.068588 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" Dec 12 16:10:53 crc kubenswrapper[4693]: I1212 16:10:53.069709 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-2thk6"] Dec 12 16:10:53 crc kubenswrapper[4693]: I1212 16:10:53.194912 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c229b29c-37d6-4d57-b983-9093c267bdef-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-2thk6\" (UID: \"c229b29c-37d6-4d57-b983-9093c267bdef\") " pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" Dec 12 16:10:53 crc kubenswrapper[4693]: I1212 16:10:53.195014 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-498zp\" (UniqueName: \"kubernetes.io/projected/c229b29c-37d6-4d57-b983-9093c267bdef-kube-api-access-498zp\") pod \"dnsmasq-dns-7cb5889db5-2thk6\" (UID: \"c229b29c-37d6-4d57-b983-9093c267bdef\") " pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" Dec 12 16:10:53 crc kubenswrapper[4693]: I1212 16:10:53.195182 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c229b29c-37d6-4d57-b983-9093c267bdef-config\") pod \"dnsmasq-dns-7cb5889db5-2thk6\" (UID: \"c229b29c-37d6-4d57-b983-9093c267bdef\") " pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" Dec 12 16:10:53 crc kubenswrapper[4693]: I1212 16:10:53.302224 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c229b29c-37d6-4d57-b983-9093c267bdef-config\") pod \"dnsmasq-dns-7cb5889db5-2thk6\" (UID: \"c229b29c-37d6-4d57-b983-9093c267bdef\") " pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" Dec 12 16:10:53 crc kubenswrapper[4693]: I1212 16:10:53.302306 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c229b29c-37d6-4d57-b983-9093c267bdef-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-2thk6\" (UID: \"c229b29c-37d6-4d57-b983-9093c267bdef\") " pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" Dec 12 16:10:53 crc kubenswrapper[4693]: 
I1212 16:10:53.302372 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-498zp\" (UniqueName: \"kubernetes.io/projected/c229b29c-37d6-4d57-b983-9093c267bdef-kube-api-access-498zp\") pod \"dnsmasq-dns-7cb5889db5-2thk6\" (UID: \"c229b29c-37d6-4d57-b983-9093c267bdef\") " pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" Dec 12 16:10:53 crc kubenswrapper[4693]: I1212 16:10:53.303804 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c229b29c-37d6-4d57-b983-9093c267bdef-config\") pod \"dnsmasq-dns-7cb5889db5-2thk6\" (UID: \"c229b29c-37d6-4d57-b983-9093c267bdef\") " pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" Dec 12 16:10:53 crc kubenswrapper[4693]: I1212 16:10:53.303858 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c229b29c-37d6-4d57-b983-9093c267bdef-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-2thk6\" (UID: \"c229b29c-37d6-4d57-b983-9093c267bdef\") " pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" Dec 12 16:10:53 crc kubenswrapper[4693]: I1212 16:10:53.327699 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-498zp\" (UniqueName: \"kubernetes.io/projected/c229b29c-37d6-4d57-b983-9093c267bdef-kube-api-access-498zp\") pod \"dnsmasq-dns-7cb5889db5-2thk6\" (UID: \"c229b29c-37d6-4d57-b983-9093c267bdef\") " pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" Dec 12 16:10:53 crc kubenswrapper[4693]: I1212 16:10:53.404427 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.106590 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.114743 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.121869 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.122315 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-jmsxv" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.122586 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.122806 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.128009 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.242875 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3fe5a970-1de1-4166-815b-81097dfe20ce","Type":"ContainerStarted","Data":"5a0f8541c5a51323408ef686b59150c1f68f5a9887ed62a424c2c47b92efa9ad"} Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.254411 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.254728 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-lock\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.254959 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-96b5205e-5323-4f2e-9cb0-90872264a457\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96b5205e-5323-4f2e-9cb0-90872264a457\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.255011 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-cache\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.255062 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpnl4\" (UniqueName: \"kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-kube-api-access-kpnl4\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.356533 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.356622 4693 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-lock\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.356666 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-96b5205e-5323-4f2e-9cb0-90872264a457\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96b5205e-5323-4f2e-9cb0-90872264a457\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.356686 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-cache\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.356708 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpnl4\" (UniqueName: \"kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-kube-api-access-kpnl4\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: E1212 16:10:54.356769 4693 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 12 16:10:54 crc kubenswrapper[4693]: E1212 16:10:54.356799 4693 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 12 16:10:54 crc kubenswrapper[4693]: E1212 16:10:54.356866 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift podName:39fd15fe-bbdd-49d4-95cc-70049f5b8d3c nodeName:}" failed. No retries permitted until 2025-12-12 16:10:54.85684223 +0000 UTC m=+1482.025481901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift") pod "swift-storage-0" (UID: "39fd15fe-bbdd-49d4-95cc-70049f5b8d3c") : configmap "swift-ring-files" not found Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.357209 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-lock\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.357412 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-cache\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.361831 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.361866 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-96b5205e-5323-4f2e-9cb0-90872264a457\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96b5205e-5323-4f2e-9cb0-90872264a457\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/136ded1b8385e46ffe0ed939272c89211df3f8826cb62a13fd9c0196bd3d6724/globalmount\"" pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.384742 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpnl4\" (UniqueName: \"kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-kube-api-access-kpnl4\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.403978 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-96b5205e-5323-4f2e-9cb0-90872264a457\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96b5205e-5323-4f2e-9cb0-90872264a457\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: I1212 16:10:54.867277 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:54 crc kubenswrapper[4693]: E1212 16:10:54.867542 4693 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 12 16:10:54 crc kubenswrapper[4693]: E1212 16:10:54.867783 4693 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 12 16:10:54 crc kubenswrapper[4693]: E1212 16:10:54.867882 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift podName:39fd15fe-bbdd-49d4-95cc-70049f5b8d3c nodeName:}" failed. No retries permitted until 2025-12-12 16:10:55.867853147 +0000 UTC m=+1483.036492778 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift") pod "swift-storage-0" (UID: "39fd15fe-bbdd-49d4-95cc-70049f5b8d3c") : configmap "swift-ring-files" not found Dec 12 16:10:55 crc kubenswrapper[4693]: I1212 16:10:55.888538 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:55 crc kubenswrapper[4693]: E1212 16:10:55.888830 4693 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 12 16:10:55 crc kubenswrapper[4693]: E1212 16:10:55.888853 4693 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 12 16:10:55 crc kubenswrapper[4693]: E1212 16:10:55.888921 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift podName:39fd15fe-bbdd-49d4-95cc-70049f5b8d3c nodeName:}" failed. No retries permitted until 2025-12-12 16:10:57.888896753 +0000 UTC m=+1485.057536364 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift") pod "swift-storage-0" (UID: "39fd15fe-bbdd-49d4-95cc-70049f5b8d3c") : configmap "swift-ring-files" not found Dec 12 16:10:55 crc kubenswrapper[4693]: I1212 16:10:55.936496 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-6cwfg" Dec 12 16:10:57 crc kubenswrapper[4693]: I1212 16:10:57.941948 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:10:57 crc kubenswrapper[4693]: E1212 16:10:57.942584 4693 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 12 16:10:57 crc kubenswrapper[4693]: E1212 16:10:57.942599 4693 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 12 16:10:57 crc kubenswrapper[4693]: E1212 16:10:57.942647 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift podName:39fd15fe-bbdd-49d4-95cc-70049f5b8d3c nodeName:}" failed. No retries permitted until 2025-12-12 16:11:01.942633337 +0000 UTC m=+1489.111272938 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift") pod "swift-storage-0" (UID: "39fd15fe-bbdd-49d4-95cc-70049f5b8d3c") : configmap "swift-ring-files" not found Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.143693 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-b8xvf"] Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.145333 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.154523 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-b8xvf"] Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.160692 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.160885 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.161454 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.248976 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-combined-ca-bundle\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.249055 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-dispersionconf\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.249129 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3e230bf0-bb21-469f-ad05-1d061026d73f-ring-data-devices\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.249210 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e230bf0-bb21-469f-ad05-1d061026d73f-scripts\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.249253 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t87f\" (UniqueName: \"kubernetes.io/projected/3e230bf0-bb21-469f-ad05-1d061026d73f-kube-api-access-4t87f\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.249688 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3e230bf0-bb21-469f-ad05-1d061026d73f-etc-swift\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.250013 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-swiftconf\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 
16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.352793 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3e230bf0-bb21-469f-ad05-1d061026d73f-etc-swift\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.352914 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-swiftconf\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.352988 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-combined-ca-bundle\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.353018 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-dispersionconf\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.353071 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3e230bf0-bb21-469f-ad05-1d061026d73f-ring-data-devices\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.353126 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e230bf0-bb21-469f-ad05-1d061026d73f-scripts\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.353153 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t87f\" (UniqueName: \"kubernetes.io/projected/3e230bf0-bb21-469f-ad05-1d061026d73f-kube-api-access-4t87f\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.353304 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3e230bf0-bb21-469f-ad05-1d061026d73f-etc-swift\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.353988 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3e230bf0-bb21-469f-ad05-1d061026d73f-ring-data-devices\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.354218 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e230bf0-bb21-469f-ad05-1d061026d73f-scripts\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.366007 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-swiftconf\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.366113 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-combined-ca-bundle\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.374778 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-dispersionconf\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.380695 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t87f\" (UniqueName: \"kubernetes.io/projected/3e230bf0-bb21-469f-ad05-1d061026d73f-kube-api-access-4t87f\") pod \"swift-ring-rebalance-b8xvf\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.486991 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.833669 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 12 16:10:58 crc kubenswrapper[4693]: I1212 16:10:58.833744 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 12 16:11:00 crc kubenswrapper[4693]: I1212 16:11:00.406193 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 12 16:11:00 crc kubenswrapper[4693]: I1212 16:11:00.522629 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 12 16:11:01 crc kubenswrapper[4693]: I1212 16:11:01.445491 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9" podUID="917b9605-32a3-4e61-9127-aff641344aa3" containerName="dnsmasq-dns" containerID="cri-o://d3b2171a12eddade655a37869d49a170d6afd9612d01f57d08322ae7a210d9d3" gracePeriod=10 Dec 12 16:11:01 crc kubenswrapper[4693]: I1212 16:11:01.472980 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"35312611-07ec-4f42-b175-e31630485c05","Type":"ContainerStarted","Data":"0521196c2332cff6082e8ed86617b2db5bdbf7ac2acd9108ff57014fe4abdc05"} Dec 12 16:11:01 crc kubenswrapper[4693]: I1212 16:11:01.473402 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9" event={"ID":"917b9605-32a3-4e61-9127-aff641344aa3","Type":"ContainerStarted","Data":"d3b2171a12eddade655a37869d49a170d6afd9612d01f57d08322ae7a210d9d3"} Dec 12 16:11:01 crc kubenswrapper[4693]: I1212 16:11:01.473429 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9" Dec 12 16:11:01 crc kubenswrapper[4693]: I1212 16:11:01.473443 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"745f2c7a-de43-450e-a05b-0dcc7d3b834a","Type":"ContainerStarted","Data":"8e8a67ae65a5f807d523211ccbe2a4a4905cfc86386a096fbc9ed29a3e072b19"} Dec 12 16:11:01 crc kubenswrapper[4693]: I1212 16:11:01.476442 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-296qr" event={"ID":"af0d9145-66a5-493b-9528-debabd220fb0","Type":"ContainerStarted","Data":"8e0eb141f72b8d68da3154e8cb324797ecb870824c6938e655ddb6d1448ce2aa"} Dec 12 16:11:01 crc kubenswrapper[4693]: I1212 16:11:01.476610 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:11:01 crc kubenswrapper[4693]: I1212 16:11:01.476631 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:11:01 crc kubenswrapper[4693]: I1212 16:11:01.477863 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=18.657643024 podStartE2EDuration="37.477848142s" podCreationTimestamp="2025-12-12 16:10:24 +0000 UTC" firstStartedPulling="2025-12-12 16:10:42.193096359 +0000 UTC m=+1469.361735960" lastFinishedPulling="2025-12-12 16:11:01.013301477 +0000 UTC m=+1488.181941078" observedRunningTime="2025-12-12 16:11:01.470266818 +0000 UTC m=+1488.638906419" watchObservedRunningTime="2025-12-12 16:11:01.477848142 +0000 UTC m=+1488.646487743" Dec 12 16:11:01 crc kubenswrapper[4693]: W1212 16:11:01.494642 4693 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc229b29c_37d6_4d57_b983_9093c267bdef.slice/crio-3170f843e13199b2e2b26bf34338b6708fa2bb92a6c4fb2aff50ce3d12a2bb56 WatchSource:0}: Error finding container 3170f843e13199b2e2b26bf34338b6708fa2bb92a6c4fb2aff50ce3d12a2bb56: Status 404 returned error can't find the container with id 3170f843e13199b2e2b26bf34338b6708fa2bb92a6c4fb2aff50ce3d12a2bb56 Dec 12 16:11:01 crc kubenswrapper[4693]: I1212 16:11:01.500368 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-b8xvf"] Dec 12 16:11:01 crc kubenswrapper[4693]: I1212 16:11:01.530709 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-2thk6"] Dec 12 16:11:01 crc kubenswrapper[4693]: I1212 16:11:01.545785 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=16.380664333 podStartE2EDuration="34.545764239s" podCreationTimestamp="2025-12-12 16:10:27 +0000 UTC" firstStartedPulling="2025-12-12 16:10:42.814091824 +0000 UTC m=+1469.982731425" lastFinishedPulling="2025-12-12 16:11:00.97919174 +0000 UTC m=+1488.147831331" observedRunningTime="2025-12-12 16:11:01.501346184 +0000 UTC m=+1488.669985795" watchObservedRunningTime="2025-12-12 16:11:01.545764239 +0000 UTC m=+1488.714403850" Dec 12 16:11:01 crc kubenswrapper[4693]: I1212 16:11:01.563142 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9" podStartSLOduration=-9223371990.291662 podStartE2EDuration="46.563113106s" podCreationTimestamp="2025-12-12 16:10:15 +0000 UTC" firstStartedPulling="2025-12-12 16:10:16.986170712 +0000 UTC m=+1444.154810303" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:11:01.515475964 +0000 UTC m=+1488.684115565" watchObservedRunningTime="2025-12-12 16:11:01.563113106 +0000 UTC m=+1488.731752717" Dec 12 16:11:01 crc kubenswrapper[4693]: I1212 16:11:01.570262 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-296qr" podStartSLOduration=32.072972001 podStartE2EDuration="36.570236727s" podCreationTimestamp="2025-12-12 16:10:25 +0000 UTC" firstStartedPulling="2025-12-12 16:10:41.701491055 +0000 UTC m=+1468.870130656" lastFinishedPulling="2025-12-12 16:10:46.198755781 +0000 UTC m=+1473.367395382" observedRunningTime="2025-12-12 16:11:01.541250038 +0000 UTC m=+1488.709889649" watchObservedRunningTime="2025-12-12 16:11:01.570236727 +0000 UTC m=+1488.738876328"
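
One value above deserves a note: podStartSLOduration=-9223371990.291662 for dnsmasq-dns-57d769cc4f-4l2q9. The same entry shows lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC", i.e. a zero time.Time; this pod is on its way out and the latency tracker never observed an image pull for it. Subtracting a 2025 timestamp from the zero time overflows time.Duration, and Go's Sub clamps the result to math.MinInt64 nanoseconds (-9223372036.854775s) rather than wrapping; adding the 46.563113106s E2E duration back yields exactly the logged figure. A small runnable demonstration of the clamp:

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    func main() {
        var lastFinishedPulling time.Time // zero value: 0001-01-01 00:00:00 UTC
        created := time.Date(2025, 12, 12, 16, 10, 15, 0, time.UTC)

        d := lastFinishedPulling.Sub(created) // overflows; Sub clamps instead of wrapping
        fmt.Println(d.Seconds())                       // -9.223372036854776e+09
        fmt.Println(d == time.Duration(math.MinInt64)) // true
    }
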
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9" Dec 12 16:11:01 crc kubenswrapper[4693]: I1212 16:11:01.955487 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:11:01 crc kubenswrapper[4693]: E1212 16:11:01.955684 4693 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 12 16:11:01 crc kubenswrapper[4693]: E1212 16:11:01.955715 4693 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 12 16:11:01 crc kubenswrapper[4693]: E1212 16:11:01.955765 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift podName:39fd15fe-bbdd-49d4-95cc-70049f5b8d3c nodeName:}" failed. No retries permitted until 2025-12-12 16:11:09.955749277 +0000 UTC m=+1497.124388878 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift") pod "swift-storage-0" (UID: "39fd15fe-bbdd-49d4-95cc-70049f5b8d3c") : configmap "swift-ring-files" not found Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.057317 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xws2l\" (UniqueName: \"kubernetes.io/projected/917b9605-32a3-4e61-9127-aff641344aa3-kube-api-access-xws2l\") pod \"917b9605-32a3-4e61-9127-aff641344aa3\" (UID: \"917b9605-32a3-4e61-9127-aff641344aa3\") " Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.057476 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/917b9605-32a3-4e61-9127-aff641344aa3-config\") pod \"917b9605-32a3-4e61-9127-aff641344aa3\" (UID: \"917b9605-32a3-4e61-9127-aff641344aa3\") " Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.057505 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/917b9605-32a3-4e61-9127-aff641344aa3-dns-svc\") pod \"917b9605-32a3-4e61-9127-aff641344aa3\" (UID: \"917b9605-32a3-4e61-9127-aff641344aa3\") " Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.062444 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/917b9605-32a3-4e61-9127-aff641344aa3-kube-api-access-xws2l" (OuterVolumeSpecName: "kube-api-access-xws2l") pod "917b9605-32a3-4e61-9127-aff641344aa3" (UID: "917b9605-32a3-4e61-9127-aff641344aa3"). InnerVolumeSpecName "kube-api-access-xws2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.106964 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/917b9605-32a3-4e61-9127-aff641344aa3-config" (OuterVolumeSpecName: "config") pod "917b9605-32a3-4e61-9127-aff641344aa3" (UID: "917b9605-32a3-4e61-9127-aff641344aa3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.107694 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/917b9605-32a3-4e61-9127-aff641344aa3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "917b9605-32a3-4e61-9127-aff641344aa3" (UID: "917b9605-32a3-4e61-9127-aff641344aa3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.159496 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/917b9605-32a3-4e61-9127-aff641344aa3-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.159538 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/917b9605-32a3-4e61-9127-aff641344aa3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.159553 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xws2l\" (UniqueName: \"kubernetes.io/projected/917b9605-32a3-4e61-9127-aff641344aa3-kube-api-access-xws2l\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.277345 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.344417 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.408552 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.461110 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.496984 4693 generic.go:334] "Generic (PLEG): container finished" podID="917b9605-32a3-4e61-9127-aff641344aa3" containerID="d3b2171a12eddade655a37869d49a170d6afd9612d01f57d08322ae7a210d9d3" exitCode=0 Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.497027 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9" event={"ID":"917b9605-32a3-4e61-9127-aff641344aa3","Type":"ContainerDied","Data":"d3b2171a12eddade655a37869d49a170d6afd9612d01f57d08322ae7a210d9d3"} Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.497065 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9" event={"ID":"917b9605-32a3-4e61-9127-aff641344aa3","Type":"ContainerDied","Data":"46280df7afc0dc8cdaa80b68f62f836a2f72250a27e46e2af8de3aec48f9d31c"} Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.497086 4693 scope.go:117] "RemoveContainer" containerID="d3b2171a12eddade655a37869d49a170d6afd9612d01f57d08322ae7a210d9d3" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.497088 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4l2q9" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.500589 4693 generic.go:334] "Generic (PLEG): container finished" podID="c229b29c-37d6-4d57-b983-9093c267bdef" containerID="2e81ed81c502e228d7feaca2bff473b26d91b3fac4a8f0f9da9d0a0342e7f629" exitCode=0 Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.500639 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" event={"ID":"c229b29c-37d6-4d57-b983-9093c267bdef","Type":"ContainerDied","Data":"2e81ed81c502e228d7feaca2bff473b26d91b3fac4a8f0f9da9d0a0342e7f629"} Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.500661 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" event={"ID":"c229b29c-37d6-4d57-b983-9093c267bdef","Type":"ContainerStarted","Data":"3170f843e13199b2e2b26bf34338b6708fa2bb92a6c4fb2aff50ce3d12a2bb56"} Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.509856 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-b8xvf" event={"ID":"3e230bf0-bb21-469f-ad05-1d061026d73f","Type":"ContainerStarted","Data":"eb58a19cea89e495a7f07ee4ebe7f74cb28093c6dbfc510f3bd6b24bd1284eda"} Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.513073 4693 generic.go:334] "Generic (PLEG): container finished" podID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerID="5a0f8541c5a51323408ef686b59150c1f68f5a9887ed62a424c2c47b92efa9ad" exitCode=0 Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.513172 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3fe5a970-1de1-4166-815b-81097dfe20ce","Type":"ContainerDied","Data":"5a0f8541c5a51323408ef686b59150c1f68f5a9887ed62a424c2c47b92efa9ad"} Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.514505 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.514521 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.569771 4693 scope.go:117] "RemoveContainer" containerID="4ba170d1942dd23d3e8157805a856cdcd0928956df78dbc731eca145b83ddd84" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.583496 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.587471 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.665419 4693 scope.go:117] "RemoveContainer" containerID="d3b2171a12eddade655a37869d49a170d6afd9612d01f57d08322ae7a210d9d3" Dec 12 16:11:02 crc kubenswrapper[4693]: E1212 16:11:02.669216 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3b2171a12eddade655a37869d49a170d6afd9612d01f57d08322ae7a210d9d3\": container with ID starting with d3b2171a12eddade655a37869d49a170d6afd9612d01f57d08322ae7a210d9d3 not found: ID does not exist" containerID="d3b2171a12eddade655a37869d49a170d6afd9612d01f57d08322ae7a210d9d3" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.669295 4693 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d3b2171a12eddade655a37869d49a170d6afd9612d01f57d08322ae7a210d9d3"} err="failed to get container status \"d3b2171a12eddade655a37869d49a170d6afd9612d01f57d08322ae7a210d9d3\": rpc error: code = NotFound desc = could not find container \"d3b2171a12eddade655a37869d49a170d6afd9612d01f57d08322ae7a210d9d3\": container with ID starting with d3b2171a12eddade655a37869d49a170d6afd9612d01f57d08322ae7a210d9d3 not found: ID does not exist" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.669323 4693 scope.go:117] "RemoveContainer" containerID="4ba170d1942dd23d3e8157805a856cdcd0928956df78dbc731eca145b83ddd84" Dec 12 16:11:02 crc kubenswrapper[4693]: E1212 16:11:02.669681 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ba170d1942dd23d3e8157805a856cdcd0928956df78dbc731eca145b83ddd84\": container with ID starting with 4ba170d1942dd23d3e8157805a856cdcd0928956df78dbc731eca145b83ddd84 not found: ID does not exist" containerID="4ba170d1942dd23d3e8157805a856cdcd0928956df78dbc731eca145b83ddd84" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.669705 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba170d1942dd23d3e8157805a856cdcd0928956df78dbc731eca145b83ddd84"} err="failed to get container status \"4ba170d1942dd23d3e8157805a856cdcd0928956df78dbc731eca145b83ddd84\": rpc error: code = NotFound desc = could not find container \"4ba170d1942dd23d3e8157805a856cdcd0928956df78dbc731eca145b83ddd84\": container with ID starting with 4ba170d1942dd23d3e8157805a856cdcd0928956df78dbc731eca145b83ddd84 not found: ID does not exist" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.687725 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.737870 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4l2q9"] Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.759018 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4l2q9"] Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.920925 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-cwzlg"] Dec 12 16:11:02 crc kubenswrapper[4693]: E1212 16:11:02.921797 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917b9605-32a3-4e61-9127-aff641344aa3" containerName="dnsmasq-dns" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.921920 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="917b9605-32a3-4e61-9127-aff641344aa3" containerName="dnsmasq-dns" Dec 12 16:11:02 crc kubenswrapper[4693]: E1212 16:11:02.922010 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917b9605-32a3-4e61-9127-aff641344aa3" containerName="init" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.922221 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="917b9605-32a3-4e61-9127-aff641344aa3" containerName="init" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.922579 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="917b9605-32a3-4e61-9127-aff641344aa3" containerName="dnsmasq-dns" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.923624 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-cwzlg" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.956234 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-2thk6"] Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.968318 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.977346 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-cwzlg"] Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.989368 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-9g4v7"] Dec 12 16:11:02 crc kubenswrapper[4693]: I1212 16:11:02.991531 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.007641 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.011782 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4eab5418-24ec-488e-a57f-f35b117fa044-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-cwzlg\" (UID: \"4eab5418-24ec-488e-a57f-f35b117fa044\") " pod="openstack/mysqld-exporter-openstack-db-create-cwzlg" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.011844 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5lrz\" (UniqueName: \"kubernetes.io/projected/4eab5418-24ec-488e-a57f-f35b117fa044-kube-api-access-g5lrz\") pod \"mysqld-exporter-openstack-db-create-cwzlg\" (UID: \"4eab5418-24ec-488e-a57f-f35b117fa044\") " pod="openstack/mysqld-exporter-openstack-db-create-cwzlg" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.018454 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-9g4v7"] Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.039474 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-8580-account-create-update-nqj68"] Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.042260 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-8580-account-create-update-nqj68" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.049834 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.073069 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-8580-account-create-update-nqj68"] Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.073142 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.149663 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08f2fcab-af71-4ce3-b895-b65b0fea7736-operator-scripts\") pod \"mysqld-exporter-8580-account-create-update-nqj68\" (UID: \"08f2fcab-af71-4ce3-b895-b65b0fea7736\") " pod="openstack/mysqld-exporter-8580-account-create-update-nqj68" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.149922 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-9g4v7\" (UID: \"7656e7ff-beb0-46db-9955-05cf273af8fc\") " pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.150072 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4eab5418-24ec-488e-a57f-f35b117fa044-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-cwzlg\" (UID: \"4eab5418-24ec-488e-a57f-f35b117fa044\") " pod="openstack/mysqld-exporter-openstack-db-create-cwzlg" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.150134 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs5gf\" (UniqueName: \"kubernetes.io/projected/7656e7ff-beb0-46db-9955-05cf273af8fc-kube-api-access-gs5gf\") pod \"dnsmasq-dns-6c89d5d749-9g4v7\" (UID: \"7656e7ff-beb0-46db-9955-05cf273af8fc\") " pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.150169 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5lrz\" (UniqueName: \"kubernetes.io/projected/4eab5418-24ec-488e-a57f-f35b117fa044-kube-api-access-g5lrz\") pod \"mysqld-exporter-openstack-db-create-cwzlg\" (UID: \"4eab5418-24ec-488e-a57f-f35b117fa044\") " pod="openstack/mysqld-exporter-openstack-db-create-cwzlg" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.150217 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-config\") pod \"dnsmasq-dns-6c89d5d749-9g4v7\" (UID: \"7656e7ff-beb0-46db-9955-05cf273af8fc\") " pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.150252 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-9g4v7\" (UID: \"7656e7ff-beb0-46db-9955-05cf273af8fc\") " pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" Dec 12 
16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.150374 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz7xc\" (UniqueName: \"kubernetes.io/projected/08f2fcab-af71-4ce3-b895-b65b0fea7736-kube-api-access-mz7xc\") pod \"mysqld-exporter-8580-account-create-update-nqj68\" (UID: \"08f2fcab-af71-4ce3-b895-b65b0fea7736\") " pod="openstack/mysqld-exporter-8580-account-create-update-nqj68" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.152065 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4eab5418-24ec-488e-a57f-f35b117fa044-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-cwzlg\" (UID: \"4eab5418-24ec-488e-a57f-f35b117fa044\") " pod="openstack/mysqld-exporter-openstack-db-create-cwzlg" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.204491 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5lrz\" (UniqueName: \"kubernetes.io/projected/4eab5418-24ec-488e-a57f-f35b117fa044-kube-api-access-g5lrz\") pod \"mysqld-exporter-openstack-db-create-cwzlg\" (UID: \"4eab5418-24ec-488e-a57f-f35b117fa044\") " pod="openstack/mysqld-exporter-openstack-db-create-cwzlg" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.263015 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08f2fcab-af71-4ce3-b895-b65b0fea7736-operator-scripts\") pod \"mysqld-exporter-8580-account-create-update-nqj68\" (UID: \"08f2fcab-af71-4ce3-b895-b65b0fea7736\") " pod="openstack/mysqld-exporter-8580-account-create-update-nqj68" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.263877 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-9g4v7\" (UID: \"7656e7ff-beb0-46db-9955-05cf273af8fc\") " pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.265039 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-9g4v7\" (UID: \"7656e7ff-beb0-46db-9955-05cf273af8fc\") " pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.264370 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08f2fcab-af71-4ce3-b895-b65b0fea7736-operator-scripts\") pod \"mysqld-exporter-8580-account-create-update-nqj68\" (UID: \"08f2fcab-af71-4ce3-b895-b65b0fea7736\") " pod="openstack/mysqld-exporter-8580-account-create-update-nqj68" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.265427 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs5gf\" (UniqueName: \"kubernetes.io/projected/7656e7ff-beb0-46db-9955-05cf273af8fc-kube-api-access-gs5gf\") pod \"dnsmasq-dns-6c89d5d749-9g4v7\" (UID: \"7656e7ff-beb0-46db-9955-05cf273af8fc\") " pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.265584 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-config\") pod 
\"dnsmasq-dns-6c89d5d749-9g4v7\" (UID: \"7656e7ff-beb0-46db-9955-05cf273af8fc\") " pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.266519 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-9g4v7\" (UID: \"7656e7ff-beb0-46db-9955-05cf273af8fc\") " pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.266583 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz7xc\" (UniqueName: \"kubernetes.io/projected/08f2fcab-af71-4ce3-b895-b65b0fea7736-kube-api-access-mz7xc\") pod \"mysqld-exporter-8580-account-create-update-nqj68\" (UID: \"08f2fcab-af71-4ce3-b895-b65b0fea7736\") " pod="openstack/mysqld-exporter-8580-account-create-update-nqj68" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.269322 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-j9m7j"] Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.271454 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-9g4v7\" (UID: \"7656e7ff-beb0-46db-9955-05cf273af8fc\") " pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.272803 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-config\") pod \"dnsmasq-dns-6c89d5d749-9g4v7\" (UID: \"7656e7ff-beb0-46db-9955-05cf273af8fc\") " pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.273464 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.277168 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.293750 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-cwzlg" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.295904 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs5gf\" (UniqueName: \"kubernetes.io/projected/7656e7ff-beb0-46db-9955-05cf273af8fc-kube-api-access-gs5gf\") pod \"dnsmasq-dns-6c89d5d749-9g4v7\" (UID: \"7656e7ff-beb0-46db-9955-05cf273af8fc\") " pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.326877 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-j9m7j"] Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.343807 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.361766 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz7xc\" (UniqueName: \"kubernetes.io/projected/08f2fcab-af71-4ce3-b895-b65b0fea7736-kube-api-access-mz7xc\") pod \"mysqld-exporter-8580-account-create-update-nqj68\" (UID: \"08f2fcab-af71-4ce3-b895-b65b0fea7736\") " pod="openstack/mysqld-exporter-8580-account-create-update-nqj68" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.393387 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-8580-account-create-update-nqj68" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.428378 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="917b9605-32a3-4e61-9127-aff641344aa3" path="/var/lib/kubelet/pods/917b9605-32a3-4e61-9127-aff641344aa3/volumes" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.429395 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.432815 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.432846 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-9g4v7"] Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.432865 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-7z76l"] Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.434233 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.435090 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.439052 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.441072 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.441605 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-csv8b" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.441232 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.443059 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7z76l"] Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.449556 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.472121 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2hzf\" (UniqueName: \"kubernetes.io/projected/9bcffb4b-6142-4b31-8403-6b00de7fd93a-kube-api-access-q2hzf\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.472185 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bcffb4b-6142-4b31-8403-6b00de7fd93a-config\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.472216 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bcffb4b-6142-4b31-8403-6b00de7fd93a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.472295 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9bcffb4b-6142-4b31-8403-6b00de7fd93a-ovn-rundir\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.472314 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bcffb4b-6142-4b31-8403-6b00de7fd93a-combined-ca-bundle\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.472360 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9bcffb4b-6142-4b31-8403-6b00de7fd93a-ovs-rundir\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc 
kubenswrapper[4693]: I1212 16:11:03.579201 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9bcffb4b-6142-4b31-8403-6b00de7fd93a-ovn-rundir\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579251 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bcffb4b-6142-4b31-8403-6b00de7fd93a-combined-ca-bundle\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579293 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7z76l\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579315 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-dns-svc\") pod \"dnsmasq-dns-698758b865-7z76l\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579358 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9bcffb4b-6142-4b31-8403-6b00de7fd93a-ovs-rundir\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579386 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/215ca5de-ce9f-4370-8aff-715dd1e384a3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579496 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-config\") pod \"dnsmasq-dns-698758b865-7z76l\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579524 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/215ca5de-ce9f-4370-8aff-715dd1e384a3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579549 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/215ca5de-ce9f-4370-8aff-715dd1e384a3-config\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579597 4693 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2hzf\" (UniqueName: \"kubernetes.io/projected/9bcffb4b-6142-4b31-8403-6b00de7fd93a-kube-api-access-q2hzf\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579619 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215ca5de-ce9f-4370-8aff-715dd1e384a3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579641 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8lgs\" (UniqueName: \"kubernetes.io/projected/215ca5de-ce9f-4370-8aff-715dd1e384a3-kube-api-access-v8lgs\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579672 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/215ca5de-ce9f-4370-8aff-715dd1e384a3-scripts\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579696 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bcffb4b-6142-4b31-8403-6b00de7fd93a-config\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579717 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7z76l\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579742 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bcffb4b-6142-4b31-8403-6b00de7fd93a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579766 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cxtq\" (UniqueName: \"kubernetes.io/projected/52f33022-5f32-4ca6-bc92-5753d41cd038-kube-api-access-5cxtq\") pod \"dnsmasq-dns-698758b865-7z76l\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579829 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/215ca5de-ce9f-4370-8aff-715dd1e384a3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579809 4693 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9bcffb4b-6142-4b31-8403-6b00de7fd93a-ovn-rundir\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.579951 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9bcffb4b-6142-4b31-8403-6b00de7fd93a-ovs-rundir\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.581001 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bcffb4b-6142-4b31-8403-6b00de7fd93a-config\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.598906 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bcffb4b-6142-4b31-8403-6b00de7fd93a-combined-ca-bundle\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.599023 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" event={"ID":"c229b29c-37d6-4d57-b983-9093c267bdef","Type":"ContainerStarted","Data":"84a1d0b1782b934a006b4d49dd57c67ed374d764caeee8eb5d84d8a09b82a882"} Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.607935 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.619392 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bcffb4b-6142-4b31-8403-6b00de7fd93a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.623824 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2hzf\" (UniqueName: \"kubernetes.io/projected/9bcffb4b-6142-4b31-8403-6b00de7fd93a-kube-api-access-q2hzf\") pod \"ovn-controller-metrics-j9m7j\" (UID: \"9bcffb4b-6142-4b31-8403-6b00de7fd93a\") " pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.656775 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" podStartSLOduration=10.656746144 podStartE2EDuration="10.656746144s" podCreationTimestamp="2025-12-12 16:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:11:03.627921199 +0000 UTC m=+1490.796560800" watchObservedRunningTime="2025-12-12 16:11:03.656746144 +0000 UTC m=+1490.825385745" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.683392 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/215ca5de-ce9f-4370-8aff-715dd1e384a3-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.683617 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7z76l\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.683883 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/215ca5de-ce9f-4370-8aff-715dd1e384a3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.684516 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7z76l\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.683650 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-dns-svc\") pod \"dnsmasq-dns-698758b865-7z76l\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.684654 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-dns-svc\") pod \"dnsmasq-dns-698758b865-7z76l\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.688502 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/215ca5de-ce9f-4370-8aff-715dd1e384a3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.688677 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-config\") pod \"dnsmasq-dns-698758b865-7z76l\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.688739 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/215ca5de-ce9f-4370-8aff-715dd1e384a3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.688840 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/215ca5de-ce9f-4370-8aff-715dd1e384a3-config\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.688879 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/215ca5de-ce9f-4370-8aff-715dd1e384a3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.688932 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8lgs\" (UniqueName: \"kubernetes.io/projected/215ca5de-ce9f-4370-8aff-715dd1e384a3-kube-api-access-v8lgs\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.688995 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/215ca5de-ce9f-4370-8aff-715dd1e384a3-scripts\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.689091 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7z76l\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.689186 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cxtq\" (UniqueName: \"kubernetes.io/projected/52f33022-5f32-4ca6-bc92-5753d41cd038-kube-api-access-5cxtq\") pod \"dnsmasq-dns-698758b865-7z76l\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.692157 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/215ca5de-ce9f-4370-8aff-715dd1e384a3-scripts\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.696870 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-config\") pod \"dnsmasq-dns-698758b865-7z76l\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.697838 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/215ca5de-ce9f-4370-8aff-715dd1e384a3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.703354 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7z76l\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.704716 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/215ca5de-ce9f-4370-8aff-715dd1e384a3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 
16:11:03.708188 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215ca5de-ce9f-4370-8aff-715dd1e384a3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.716446 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8lgs\" (UniqueName: \"kubernetes.io/projected/215ca5de-ce9f-4370-8aff-715dd1e384a3-kube-api-access-v8lgs\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.718304 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cxtq\" (UniqueName: \"kubernetes.io/projected/52f33022-5f32-4ca6-bc92-5753d41cd038-kube-api-access-5cxtq\") pod \"dnsmasq-dns-698758b865-7z76l\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.723427 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/215ca5de-ce9f-4370-8aff-715dd1e384a3-config\") pod \"ovn-northd-0\" (UID: \"215ca5de-ce9f-4370-8aff-715dd1e384a3\") " pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.811124 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-j9m7j" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.839590 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 12 16:11:03 crc kubenswrapper[4693]: I1212 16:11:03.851990 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:04 crc kubenswrapper[4693]: I1212 16:11:04.065061 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-cwzlg"] Dec 12 16:11:04 crc kubenswrapper[4693]: I1212 16:11:04.129018 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-8580-account-create-update-nqj68"] Dec 12 16:11:04 crc kubenswrapper[4693]: W1212 16:11:04.129184 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4eab5418_24ec_488e_a57f_f35b117fa044.slice/crio-3e9686de7008d17a1ab146e80d42adfe30837c7f98c8d4cf295bbf99a34d79a4 WatchSource:0}: Error finding container 3e9686de7008d17a1ab146e80d42adfe30837c7f98c8d4cf295bbf99a34d79a4: Status 404 returned error can't find the container with id 3e9686de7008d17a1ab146e80d42adfe30837c7f98c8d4cf295bbf99a34d79a4 Dec 12 16:11:04 crc kubenswrapper[4693]: W1212 16:11:04.133606 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08f2fcab_af71_4ce3_b895_b65b0fea7736.slice/crio-e6b69acb33eca3a4c3f48c041d3058f696d4f86c0cb61b3d811d6b0bc2bbe9fd WatchSource:0}: Error finding container e6b69acb33eca3a4c3f48c041d3058f696d4f86c0cb61b3d811d6b0bc2bbe9fd: Status 404 returned error can't find the container with id e6b69acb33eca3a4c3f48c041d3058f696d4f86c0cb61b3d811d6b0bc2bbe9fd Dec 12 16:11:04 crc kubenswrapper[4693]: I1212 16:11:04.263807 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-9g4v7"] Dec 12 16:11:04 crc kubenswrapper[4693]: W1212 16:11:04.265319 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7656e7ff_beb0_46db_9955_05cf273af8fc.slice/crio-2e4ea47b63d4c53d797adc0805a75ea0640dc476fdb0a53ddfdc6325d1cb5f3f WatchSource:0}: Error finding container 2e4ea47b63d4c53d797adc0805a75ea0640dc476fdb0a53ddfdc6325d1cb5f3f: Status 404 returned error can't find the container with id 2e4ea47b63d4c53d797adc0805a75ea0640dc476fdb0a53ddfdc6325d1cb5f3f Dec 12 16:11:04 crc kubenswrapper[4693]: I1212 16:11:04.511567 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-j9m7j"] Dec 12 16:11:04 crc kubenswrapper[4693]: I1212 16:11:04.527947 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7z76l"] Dec 12 16:11:04 crc kubenswrapper[4693]: I1212 16:11:04.540895 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 12 16:11:04 crc kubenswrapper[4693]: I1212 16:11:04.615698 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8580-account-create-update-nqj68" event={"ID":"08f2fcab-af71-4ce3-b895-b65b0fea7736","Type":"ContainerStarted","Data":"866f8d3f52abbb76967f5e62e0237bf0d1647a6b636546633de982b63bc59a69"} Dec 12 16:11:04 crc kubenswrapper[4693]: I1212 16:11:04.615751 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8580-account-create-update-nqj68" event={"ID":"08f2fcab-af71-4ce3-b895-b65b0fea7736","Type":"ContainerStarted","Data":"e6b69acb33eca3a4c3f48c041d3058f696d4f86c0cb61b3d811d6b0bc2bbe9fd"} Dec 12 16:11:04 crc kubenswrapper[4693]: I1212 16:11:04.619803 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" 
event={"ID":"7656e7ff-beb0-46db-9955-05cf273af8fc","Type":"ContainerStarted","Data":"2e4ea47b63d4c53d797adc0805a75ea0640dc476fdb0a53ddfdc6325d1cb5f3f"} Dec 12 16:11:04 crc kubenswrapper[4693]: I1212 16:11:04.623085 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-cwzlg" event={"ID":"4eab5418-24ec-488e-a57f-f35b117fa044","Type":"ContainerStarted","Data":"a975614ad602e60df12329bae171b3e7306adc87bdff99860a067673c85b6978"} Dec 12 16:11:04 crc kubenswrapper[4693]: I1212 16:11:04.623116 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-cwzlg" event={"ID":"4eab5418-24ec-488e-a57f-f35b117fa044","Type":"ContainerStarted","Data":"3e9686de7008d17a1ab146e80d42adfe30837c7f98c8d4cf295bbf99a34d79a4"} Dec 12 16:11:04 crc kubenswrapper[4693]: I1212 16:11:04.623304 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" podUID="c229b29c-37d6-4d57-b983-9093c267bdef" containerName="dnsmasq-dns" containerID="cri-o://84a1d0b1782b934a006b4d49dd57c67ed374d764caeee8eb5d84d8a09b82a882" gracePeriod=10 Dec 12 16:11:04 crc kubenswrapper[4693]: I1212 16:11:04.642192 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-8580-account-create-update-nqj68" podStartSLOduration=1.642171182 podStartE2EDuration="1.642171182s" podCreationTimestamp="2025-12-12 16:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:11:04.63316118 +0000 UTC m=+1491.801800791" watchObservedRunningTime="2025-12-12 16:11:04.642171182 +0000 UTC m=+1491.810810783" Dec 12 16:11:04 crc kubenswrapper[4693]: I1212 16:11:04.652549 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-cwzlg" podStartSLOduration=2.652530371 podStartE2EDuration="2.652530371s" podCreationTimestamp="2025-12-12 16:11:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:11:04.646959041 +0000 UTC m=+1491.815598642" watchObservedRunningTime="2025-12-12 16:11:04.652530371 +0000 UTC m=+1491.821169962" Dec 12 16:11:05 crc kubenswrapper[4693]: I1212 16:11:05.639692 4693 generic.go:334] "Generic (PLEG): container finished" podID="c229b29c-37d6-4d57-b983-9093c267bdef" containerID="84a1d0b1782b934a006b4d49dd57c67ed374d764caeee8eb5d84d8a09b82a882" exitCode=0 Dec 12 16:11:05 crc kubenswrapper[4693]: I1212 16:11:05.639772 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" event={"ID":"c229b29c-37d6-4d57-b983-9093c267bdef","Type":"ContainerDied","Data":"84a1d0b1782b934a006b4d49dd57c67ed374d764caeee8eb5d84d8a09b82a882"} Dec 12 16:11:05 crc kubenswrapper[4693]: I1212 16:11:05.642252 4693 generic.go:334] "Generic (PLEG): container finished" podID="08f2fcab-af71-4ce3-b895-b65b0fea7736" containerID="866f8d3f52abbb76967f5e62e0237bf0d1647a6b636546633de982b63bc59a69" exitCode=0 Dec 12 16:11:05 crc kubenswrapper[4693]: I1212 16:11:05.642310 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8580-account-create-update-nqj68" event={"ID":"08f2fcab-af71-4ce3-b895-b65b0fea7736","Type":"ContainerDied","Data":"866f8d3f52abbb76967f5e62e0237bf0d1647a6b636546633de982b63bc59a69"} Dec 12 16:11:05 crc kubenswrapper[4693]: I1212 
16:11:05.645083 4693 generic.go:334] "Generic (PLEG): container finished" podID="4eab5418-24ec-488e-a57f-f35b117fa044" containerID="a975614ad602e60df12329bae171b3e7306adc87bdff99860a067673c85b6978" exitCode=0 Dec 12 16:11:05 crc kubenswrapper[4693]: I1212 16:11:05.645120 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-cwzlg" event={"ID":"4eab5418-24ec-488e-a57f-f35b117fa044","Type":"ContainerDied","Data":"a975614ad602e60df12329bae171b3e7306adc87bdff99860a067673c85b6978"} Dec 12 16:11:08 crc kubenswrapper[4693]: I1212 16:11:08.410745 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" podUID="c229b29c-37d6-4d57-b983-9093c267bdef" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Dec 12 16:11:09 crc kubenswrapper[4693]: W1212 16:11:09.643851 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod215ca5de_ce9f_4370_8aff_715dd1e384a3.slice/crio-55187171bb4873cd0b1063ef7ea4cf948d5e4fc92e617e288a38b41da8e9ff3a WatchSource:0}: Error finding container 55187171bb4873cd0b1063ef7ea4cf948d5e4fc92e617e288a38b41da8e9ff3a: Status 404 returned error can't find the container with id 55187171bb4873cd0b1063ef7ea4cf948d5e4fc92e617e288a38b41da8e9ff3a Dec 12 16:11:09 crc kubenswrapper[4693]: I1212 16:11:09.711347 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-j9m7j" event={"ID":"9bcffb4b-6142-4b31-8403-6b00de7fd93a","Type":"ContainerStarted","Data":"5405c61e2dc4ae5bf949af1c8bd8c6dc1c6ae2fccdb6ea5a40fd87de65952cf6"} Dec 12 16:11:09 crc kubenswrapper[4693]: I1212 16:11:09.713323 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8580-account-create-update-nqj68" event={"ID":"08f2fcab-af71-4ce3-b895-b65b0fea7736","Type":"ContainerDied","Data":"e6b69acb33eca3a4c3f48c041d3058f696d4f86c0cb61b3d811d6b0bc2bbe9fd"} Dec 12 16:11:09 crc kubenswrapper[4693]: I1212 16:11:09.713397 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6b69acb33eca3a4c3f48c041d3058f696d4f86c0cb61b3d811d6b0bc2bbe9fd" Dec 12 16:11:09 crc kubenswrapper[4693]: I1212 16:11:09.719017 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-cwzlg" event={"ID":"4eab5418-24ec-488e-a57f-f35b117fa044","Type":"ContainerDied","Data":"3e9686de7008d17a1ab146e80d42adfe30837c7f98c8d4cf295bbf99a34d79a4"} Dec 12 16:11:09 crc kubenswrapper[4693]: I1212 16:11:09.719071 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e9686de7008d17a1ab146e80d42adfe30837c7f98c8d4cf295bbf99a34d79a4" Dec 12 16:11:09 crc kubenswrapper[4693]: I1212 16:11:09.722500 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"215ca5de-ce9f-4370-8aff-715dd1e384a3","Type":"ContainerStarted","Data":"55187171bb4873cd0b1063ef7ea4cf948d5e4fc92e617e288a38b41da8e9ff3a"} Dec 12 16:11:09 crc kubenswrapper[4693]: I1212 16:11:09.726869 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7z76l" event={"ID":"52f33022-5f32-4ca6-bc92-5753d41cd038","Type":"ContainerStarted","Data":"57fa5f06934eee02a066527d644e387576a36634299f0fe2f7446cde271318ec"} Dec 12 16:11:09 crc kubenswrapper[4693]: I1212 16:11:09.884466 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-cwzlg" Dec 12 16:11:09 crc kubenswrapper[4693]: I1212 16:11:09.897194 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-8580-account-create-update-nqj68" Dec 12 16:11:09 crc kubenswrapper[4693]: I1212 16:11:09.961161 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-mbsvw"] Dec 12 16:11:09 crc kubenswrapper[4693]: E1212 16:11:09.961724 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f2fcab-af71-4ce3-b895-b65b0fea7736" containerName="mariadb-account-create-update" Dec 12 16:11:09 crc kubenswrapper[4693]: I1212 16:11:09.961753 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f2fcab-af71-4ce3-b895-b65b0fea7736" containerName="mariadb-account-create-update" Dec 12 16:11:09 crc kubenswrapper[4693]: E1212 16:11:09.961772 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eab5418-24ec-488e-a57f-f35b117fa044" containerName="mariadb-database-create" Dec 12 16:11:09 crc kubenswrapper[4693]: I1212 16:11:09.961809 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eab5418-24ec-488e-a57f-f35b117fa044" containerName="mariadb-database-create" Dec 12 16:11:09 crc kubenswrapper[4693]: I1212 16:11:09.962072 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eab5418-24ec-488e-a57f-f35b117fa044" containerName="mariadb-database-create" Dec 12 16:11:09 crc kubenswrapper[4693]: I1212 16:11:09.962089 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="08f2fcab-af71-4ce3-b895-b65b0fea7736" containerName="mariadb-account-create-update" Dec 12 16:11:09 crc kubenswrapper[4693]: I1212 16:11:09.963041 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mbsvw" Dec 12 16:11:09 crc kubenswrapper[4693]: I1212 16:11:09.974538 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mbsvw"] Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.044679 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08f2fcab-af71-4ce3-b895-b65b0fea7736-operator-scripts\") pod \"08f2fcab-af71-4ce3-b895-b65b0fea7736\" (UID: \"08f2fcab-af71-4ce3-b895-b65b0fea7736\") " Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.045048 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5lrz\" (UniqueName: \"kubernetes.io/projected/4eab5418-24ec-488e-a57f-f35b117fa044-kube-api-access-g5lrz\") pod \"4eab5418-24ec-488e-a57f-f35b117fa044\" (UID: \"4eab5418-24ec-488e-a57f-f35b117fa044\") " Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.045135 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz7xc\" (UniqueName: \"kubernetes.io/projected/08f2fcab-af71-4ce3-b895-b65b0fea7736-kube-api-access-mz7xc\") pod \"08f2fcab-af71-4ce3-b895-b65b0fea7736\" (UID: \"08f2fcab-af71-4ce3-b895-b65b0fea7736\") " Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.045343 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4eab5418-24ec-488e-a57f-f35b117fa044-operator-scripts\") pod \"4eab5418-24ec-488e-a57f-f35b117fa044\" (UID: \"4eab5418-24ec-488e-a57f-f35b117fa044\") " Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.045832 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.046042 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08f2fcab-af71-4ce3-b895-b65b0fea7736-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08f2fcab-af71-4ce3-b895-b65b0fea7736" (UID: "08f2fcab-af71-4ce3-b895-b65b0fea7736"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:10 crc kubenswrapper[4693]: E1212 16:11:10.046180 4693 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 12 16:11:10 crc kubenswrapper[4693]: E1212 16:11:10.046207 4693 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 12 16:11:10 crc kubenswrapper[4693]: E1212 16:11:10.046370 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift podName:39fd15fe-bbdd-49d4-95cc-70049f5b8d3c nodeName:}" failed. No retries permitted until 2025-12-12 16:11:26.046260101 +0000 UTC m=+1513.214899702 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift") pod "swift-storage-0" (UID: "39fd15fe-bbdd-49d4-95cc-70049f5b8d3c") : configmap "swift-ring-files" not found Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.049443 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eab5418-24ec-488e-a57f-f35b117fa044-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4eab5418-24ec-488e-a57f-f35b117fa044" (UID: "4eab5418-24ec-488e-a57f-f35b117fa044"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.051150 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eab5418-24ec-488e-a57f-f35b117fa044-kube-api-access-g5lrz" (OuterVolumeSpecName: "kube-api-access-g5lrz") pod "4eab5418-24ec-488e-a57f-f35b117fa044" (UID: "4eab5418-24ec-488e-a57f-f35b117fa044"). InnerVolumeSpecName "kube-api-access-g5lrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.055015 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08f2fcab-af71-4ce3-b895-b65b0fea7736-kube-api-access-mz7xc" (OuterVolumeSpecName: "kube-api-access-mz7xc") pod "08f2fcab-af71-4ce3-b895-b65b0fea7736" (UID: "08f2fcab-af71-4ce3-b895-b65b0fea7736"). InnerVolumeSpecName "kube-api-access-mz7xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.060208 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-74dd-account-create-update-5h6zk"] Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.061820 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-74dd-account-create-update-5h6zk" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.066031 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.080069 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74dd-account-create-update-5h6zk"] Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.148531 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be946e65-78d9-4dd1-9a59-a290c6ae0f76-operator-scripts\") pod \"keystone-db-create-mbsvw\" (UID: \"be946e65-78d9-4dd1-9a59-a290c6ae0f76\") " pod="openstack/keystone-db-create-mbsvw" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.148912 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ghwj\" (UniqueName: \"kubernetes.io/projected/be946e65-78d9-4dd1-9a59-a290c6ae0f76-kube-api-access-7ghwj\") pod \"keystone-db-create-mbsvw\" (UID: \"be946e65-78d9-4dd1-9a59-a290c6ae0f76\") " pod="openstack/keystone-db-create-mbsvw" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.149078 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4eab5418-24ec-488e-a57f-f35b117fa044-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.149147 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08f2fcab-af71-4ce3-b895-b65b0fea7736-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.149229 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5lrz\" (UniqueName: \"kubernetes.io/projected/4eab5418-24ec-488e-a57f-f35b117fa044-kube-api-access-g5lrz\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.149313 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz7xc\" (UniqueName: \"kubernetes.io/projected/08f2fcab-af71-4ce3-b895-b65b0fea7736-kube-api-access-mz7xc\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.225837 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5d7d4f8db7-5gfv7" podUID="c0e7e551-57f3-4891-972f-4532f2fd50c4" containerName="console" containerID="cri-o://b3b2d924529d3f214244ec426a386be6f5a6d98e91c3ab05a3b25ed525025ffc" gracePeriod=15 Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.251859 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ghwj\" (UniqueName: \"kubernetes.io/projected/be946e65-78d9-4dd1-9a59-a290c6ae0f76-kube-api-access-7ghwj\") pod \"keystone-db-create-mbsvw\" (UID: \"be946e65-78d9-4dd1-9a59-a290c6ae0f76\") " pod="openstack/keystone-db-create-mbsvw" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.251931 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be946e65-78d9-4dd1-9a59-a290c6ae0f76-operator-scripts\") pod \"keystone-db-create-mbsvw\" (UID: \"be946e65-78d9-4dd1-9a59-a290c6ae0f76\") " pod="openstack/keystone-db-create-mbsvw" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.251985 
4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea7a477-0eef-4e79-bd49-d4e152de7553-operator-scripts\") pod \"keystone-74dd-account-create-update-5h6zk\" (UID: \"0ea7a477-0eef-4e79-bd49-d4e152de7553\") " pod="openstack/keystone-74dd-account-create-update-5h6zk" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.252041 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8tpd\" (UniqueName: \"kubernetes.io/projected/0ea7a477-0eef-4e79-bd49-d4e152de7553-kube-api-access-c8tpd\") pod \"keystone-74dd-account-create-update-5h6zk\" (UID: \"0ea7a477-0eef-4e79-bd49-d4e152de7553\") " pod="openstack/keystone-74dd-account-create-update-5h6zk" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.252974 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be946e65-78d9-4dd1-9a59-a290c6ae0f76-operator-scripts\") pod \"keystone-db-create-mbsvw\" (UID: \"be946e65-78d9-4dd1-9a59-a290c6ae0f76\") " pod="openstack/keystone-db-create-mbsvw" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.254868 4693 patch_prober.go:28] interesting pod/console-5d7d4f8db7-5gfv7 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.89:8443/health\": dial tcp 10.217.0.89:8443: connect: connection refused" start-of-body= Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.254923 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5d7d4f8db7-5gfv7" podUID="c0e7e551-57f3-4891-972f-4532f2fd50c4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.89:8443/health\": dial tcp 10.217.0.89:8443: connect: connection refused" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.273685 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ghwj\" (UniqueName: \"kubernetes.io/projected/be946e65-78d9-4dd1-9a59-a290c6ae0f76-kube-api-access-7ghwj\") pod \"keystone-db-create-mbsvw\" (UID: \"be946e65-78d9-4dd1-9a59-a290c6ae0f76\") " pod="openstack/keystone-db-create-mbsvw" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.288398 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mbsvw" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.359967 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8tpd\" (UniqueName: \"kubernetes.io/projected/0ea7a477-0eef-4e79-bd49-d4e152de7553-kube-api-access-c8tpd\") pod \"keystone-74dd-account-create-update-5h6zk\" (UID: \"0ea7a477-0eef-4e79-bd49-d4e152de7553\") " pod="openstack/keystone-74dd-account-create-update-5h6zk" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.360424 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea7a477-0eef-4e79-bd49-d4e152de7553-operator-scripts\") pod \"keystone-74dd-account-create-update-5h6zk\" (UID: \"0ea7a477-0eef-4e79-bd49-d4e152de7553\") " pod="openstack/keystone-74dd-account-create-update-5h6zk" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.363938 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea7a477-0eef-4e79-bd49-d4e152de7553-operator-scripts\") pod \"keystone-74dd-account-create-update-5h6zk\" (UID: \"0ea7a477-0eef-4e79-bd49-d4e152de7553\") " pod="openstack/keystone-74dd-account-create-update-5h6zk" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.378662 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-nrjn8"] Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.378778 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8tpd\" (UniqueName: \"kubernetes.io/projected/0ea7a477-0eef-4e79-bd49-d4e152de7553-kube-api-access-c8tpd\") pod \"keystone-74dd-account-create-update-5h6zk\" (UID: \"0ea7a477-0eef-4e79-bd49-d4e152de7553\") " pod="openstack/keystone-74dd-account-create-update-5h6zk" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.380166 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nrjn8" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.412397 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nrjn8"] Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.438060 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74dd-account-create-update-5h6zk" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.468524 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b334154-7111-4b39-b0fc-ffb79a331506-operator-scripts\") pod \"placement-db-create-nrjn8\" (UID: \"3b334154-7111-4b39-b0fc-ffb79a331506\") " pod="openstack/placement-db-create-nrjn8" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.468896 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj67d\" (UniqueName: \"kubernetes.io/projected/3b334154-7111-4b39-b0fc-ffb79a331506-kube-api-access-tj67d\") pod \"placement-db-create-nrjn8\" (UID: \"3b334154-7111-4b39-b0fc-ffb79a331506\") " pod="openstack/placement-db-create-nrjn8" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.473852 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ee38-account-create-update-7hnjl"] Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.475454 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ee38-account-create-update-7hnjl" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.483696 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.500055 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ee38-account-create-update-7hnjl"] Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.571508 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b334154-7111-4b39-b0fc-ffb79a331506-operator-scripts\") pod \"placement-db-create-nrjn8\" (UID: \"3b334154-7111-4b39-b0fc-ffb79a331506\") " pod="openstack/placement-db-create-nrjn8" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.571769 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzsn7\" (UniqueName: \"kubernetes.io/projected/00d3b68b-c9df-4c00-ae1f-079a98130251-kube-api-access-fzsn7\") pod \"placement-ee38-account-create-update-7hnjl\" (UID: \"00d3b68b-c9df-4c00-ae1f-079a98130251\") " pod="openstack/placement-ee38-account-create-update-7hnjl" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.571872 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00d3b68b-c9df-4c00-ae1f-079a98130251-operator-scripts\") pod \"placement-ee38-account-create-update-7hnjl\" (UID: \"00d3b68b-c9df-4c00-ae1f-079a98130251\") " pod="openstack/placement-ee38-account-create-update-7hnjl" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.571925 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj67d\" (UniqueName: \"kubernetes.io/projected/3b334154-7111-4b39-b0fc-ffb79a331506-kube-api-access-tj67d\") pod \"placement-db-create-nrjn8\" (UID: \"3b334154-7111-4b39-b0fc-ffb79a331506\") " pod="openstack/placement-db-create-nrjn8" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.572621 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b334154-7111-4b39-b0fc-ffb79a331506-operator-scripts\") pod \"placement-db-create-nrjn8\" (UID: \"3b334154-7111-4b39-b0fc-ffb79a331506\") " pod="openstack/placement-db-create-nrjn8" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.589393 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj67d\" (UniqueName: \"kubernetes.io/projected/3b334154-7111-4b39-b0fc-ffb79a331506-kube-api-access-tj67d\") pod \"placement-db-create-nrjn8\" (UID: \"3b334154-7111-4b39-b0fc-ffb79a331506\") " pod="openstack/placement-db-create-nrjn8" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.676789 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzsn7\" (UniqueName: \"kubernetes.io/projected/00d3b68b-c9df-4c00-ae1f-079a98130251-kube-api-access-fzsn7\") pod \"placement-ee38-account-create-update-7hnjl\" (UID: \"00d3b68b-c9df-4c00-ae1f-079a98130251\") " pod="openstack/placement-ee38-account-create-update-7hnjl" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.677090 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/00d3b68b-c9df-4c00-ae1f-079a98130251-operator-scripts\") pod \"placement-ee38-account-create-update-7hnjl\" (UID: \"00d3b68b-c9df-4c00-ae1f-079a98130251\") " pod="openstack/placement-ee38-account-create-update-7hnjl" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.677806 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00d3b68b-c9df-4c00-ae1f-079a98130251-operator-scripts\") pod \"placement-ee38-account-create-update-7hnjl\" (UID: \"00d3b68b-c9df-4c00-ae1f-079a98130251\") " pod="openstack/placement-ee38-account-create-update-7hnjl" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.683475 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-kwhwr"] Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.684937 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kwhwr" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.698169 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kwhwr"] Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.717390 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzsn7\" (UniqueName: \"kubernetes.io/projected/00d3b68b-c9df-4c00-ae1f-079a98130251-kube-api-access-fzsn7\") pod \"placement-ee38-account-create-update-7hnjl\" (UID: \"00d3b68b-c9df-4c00-ae1f-079a98130251\") " pod="openstack/placement-ee38-account-create-update-7hnjl" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.738470 4693 generic.go:334] "Generic (PLEG): container finished" podID="7656e7ff-beb0-46db-9955-05cf273af8fc" containerID="67419a34f724b9da685b3af4f8ff16c9fc1fb0a6d782e3b60aaedfcb41902565" exitCode=0 Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.738643 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" event={"ID":"7656e7ff-beb0-46db-9955-05cf273af8fc","Type":"ContainerDied","Data":"67419a34f724b9da685b3af4f8ff16c9fc1fb0a6d782e3b60aaedfcb41902565"} Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.741536 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d7d4f8db7-5gfv7_c0e7e551-57f3-4891-972f-4532f2fd50c4/console/0.log" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.741587 4693 generic.go:334] "Generic (PLEG): container finished" podID="c0e7e551-57f3-4891-972f-4532f2fd50c4" containerID="b3b2d924529d3f214244ec426a386be6f5a6d98e91c3ab05a3b25ed525025ffc" exitCode=2 Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.741653 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-cwzlg" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.741697 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d7d4f8db7-5gfv7" event={"ID":"c0e7e551-57f3-4891-972f-4532f2fd50c4","Type":"ContainerDied","Data":"b3b2d924529d3f214244ec426a386be6f5a6d98e91c3ab05a3b25ed525025ffc"} Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.741865 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-8580-account-create-update-nqj68" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.751817 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nrjn8" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.782411 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba5f3ece-76c3-42f5-ae1f-b47727c5217c-operator-scripts\") pod \"glance-db-create-kwhwr\" (UID: \"ba5f3ece-76c3-42f5-ae1f-b47727c5217c\") " pod="openstack/glance-db-create-kwhwr" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.782469 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwrq8\" (UniqueName: \"kubernetes.io/projected/ba5f3ece-76c3-42f5-ae1f-b47727c5217c-kube-api-access-nwrq8\") pod \"glance-db-create-kwhwr\" (UID: \"ba5f3ece-76c3-42f5-ae1f-b47727c5217c\") " pod="openstack/glance-db-create-kwhwr" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.798625 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ee38-account-create-update-7hnjl" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.802521 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3cdf-account-create-update-5dd7p"] Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.804202 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3cdf-account-create-update-5dd7p" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.806355 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.818362 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3cdf-account-create-update-5dd7p"] Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.886797 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba5f3ece-76c3-42f5-ae1f-b47727c5217c-operator-scripts\") pod \"glance-db-create-kwhwr\" (UID: \"ba5f3ece-76c3-42f5-ae1f-b47727c5217c\") " pod="openstack/glance-db-create-kwhwr" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.886855 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwrq8\" (UniqueName: \"kubernetes.io/projected/ba5f3ece-76c3-42f5-ae1f-b47727c5217c-kube-api-access-nwrq8\") pod \"glance-db-create-kwhwr\" (UID: \"ba5f3ece-76c3-42f5-ae1f-b47727c5217c\") " pod="openstack/glance-db-create-kwhwr" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.887868 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba5f3ece-76c3-42f5-ae1f-b47727c5217c-operator-scripts\") pod \"glance-db-create-kwhwr\" (UID: \"ba5f3ece-76c3-42f5-ae1f-b47727c5217c\") " pod="openstack/glance-db-create-kwhwr" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.888997 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7pkd\" (UniqueName: \"kubernetes.io/projected/2244ed54-0d24-4961-b46b-eb6bf52ae2dc-kube-api-access-z7pkd\") pod \"glance-3cdf-account-create-update-5dd7p\" (UID: \"2244ed54-0d24-4961-b46b-eb6bf52ae2dc\") " pod="openstack/glance-3cdf-account-create-update-5dd7p" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.889144 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/2244ed54-0d24-4961-b46b-eb6bf52ae2dc-operator-scripts\") pod \"glance-3cdf-account-create-update-5dd7p\" (UID: \"2244ed54-0d24-4961-b46b-eb6bf52ae2dc\") " pod="openstack/glance-3cdf-account-create-update-5dd7p" Dec 12 16:11:10 crc kubenswrapper[4693]: I1212 16:11:10.918950 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwrq8\" (UniqueName: \"kubernetes.io/projected/ba5f3ece-76c3-42f5-ae1f-b47727c5217c-kube-api-access-nwrq8\") pod \"glance-db-create-kwhwr\" (UID: \"ba5f3ece-76c3-42f5-ae1f-b47727c5217c\") " pod="openstack/glance-db-create-kwhwr" Dec 12 16:11:11 crc kubenswrapper[4693]: I1212 16:11:11.004441 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7pkd\" (UniqueName: \"kubernetes.io/projected/2244ed54-0d24-4961-b46b-eb6bf52ae2dc-kube-api-access-z7pkd\") pod \"glance-3cdf-account-create-update-5dd7p\" (UID: \"2244ed54-0d24-4961-b46b-eb6bf52ae2dc\") " pod="openstack/glance-3cdf-account-create-update-5dd7p" Dec 12 16:11:11 crc kubenswrapper[4693]: I1212 16:11:11.004589 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2244ed54-0d24-4961-b46b-eb6bf52ae2dc-operator-scripts\") pod \"glance-3cdf-account-create-update-5dd7p\" (UID: \"2244ed54-0d24-4961-b46b-eb6bf52ae2dc\") " pod="openstack/glance-3cdf-account-create-update-5dd7p" Dec 12 16:11:11 crc kubenswrapper[4693]: I1212 16:11:11.023140 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2244ed54-0d24-4961-b46b-eb6bf52ae2dc-operator-scripts\") pod \"glance-3cdf-account-create-update-5dd7p\" (UID: \"2244ed54-0d24-4961-b46b-eb6bf52ae2dc\") " pod="openstack/glance-3cdf-account-create-update-5dd7p" Dec 12 16:11:11 crc kubenswrapper[4693]: I1212 16:11:11.026216 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7pkd\" (UniqueName: \"kubernetes.io/projected/2244ed54-0d24-4961-b46b-eb6bf52ae2dc-kube-api-access-z7pkd\") pod \"glance-3cdf-account-create-update-5dd7p\" (UID: \"2244ed54-0d24-4961-b46b-eb6bf52ae2dc\") " pod="openstack/glance-3cdf-account-create-update-5dd7p" Dec 12 16:11:11 crc kubenswrapper[4693]: I1212 16:11:11.093508 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kwhwr" Dec 12 16:11:11 crc kubenswrapper[4693]: I1212 16:11:11.141729 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3cdf-account-create-update-5dd7p" Dec 12 16:11:12 crc kubenswrapper[4693]: I1212 16:11:12.332980 4693 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podc14f1710-5bb5-4333-b584-d5bff01ec285"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podc14f1710-5bb5-4333-b584-d5bff01ec285] : Timed out while waiting for systemd to remove kubepods-besteffort-podc14f1710_5bb5_4333_b584_d5bff01ec285.slice" Dec 12 16:11:12 crc kubenswrapper[4693]: E1212 16:11:12.333345 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podc14f1710-5bb5-4333-b584-d5bff01ec285] : unable to destroy cgroup paths for cgroup [kubepods besteffort podc14f1710-5bb5-4333-b584-d5bff01ec285] : Timed out while waiting for systemd to remove kubepods-besteffort-podc14f1710_5bb5_4333_b584_d5bff01ec285.slice" pod="openstack/dnsmasq-dns-78dd6ddcc-46hld" podUID="c14f1710-5bb5-4333-b584-d5bff01ec285" Dec 12 16:11:12 crc kubenswrapper[4693]: I1212 16:11:12.756786 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-46hld" Dec 12 16:11:12 crc kubenswrapper[4693]: I1212 16:11:12.810163 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-46hld"] Dec 12 16:11:12 crc kubenswrapper[4693]: I1212 16:11:12.815521 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-46hld"] Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.306997 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-9x29j"] Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.308969 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-9x29j" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.318139 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-9x29j"] Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.375799 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c14f1710-5bb5-4333-b584-d5bff01ec285" path="/var/lib/kubelet/pods/c14f1710-5bb5-4333-b584-d5bff01ec285/volumes" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.448466 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-170d-account-create-update-j789s"] Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.449972 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-170d-account-create-update-j789s" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.452096 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.457355 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-170d-account-create-update-j789s"] Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.468742 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qq5l\" (UniqueName: \"kubernetes.io/projected/8bd6e86f-1ef7-4178-8beb-70eadfa60001-kube-api-access-2qq5l\") pod \"mysqld-exporter-openstack-cell1-db-create-9x29j\" (UID: \"8bd6e86f-1ef7-4178-8beb-70eadfa60001\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-9x29j" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.468802 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bd6e86f-1ef7-4178-8beb-70eadfa60001-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-9x29j\" (UID: \"8bd6e86f-1ef7-4178-8beb-70eadfa60001\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-9x29j" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.570563 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qq5l\" (UniqueName: \"kubernetes.io/projected/8bd6e86f-1ef7-4178-8beb-70eadfa60001-kube-api-access-2qq5l\") pod \"mysqld-exporter-openstack-cell1-db-create-9x29j\" (UID: \"8bd6e86f-1ef7-4178-8beb-70eadfa60001\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-9x29j" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.570632 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bd6e86f-1ef7-4178-8beb-70eadfa60001-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-9x29j\" (UID: \"8bd6e86f-1ef7-4178-8beb-70eadfa60001\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-9x29j" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.570664 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qjwc\" (UniqueName: \"kubernetes.io/projected/3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874-kube-api-access-8qjwc\") pod \"mysqld-exporter-170d-account-create-update-j789s\" (UID: \"3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874\") " pod="openstack/mysqld-exporter-170d-account-create-update-j789s" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.570781 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874-operator-scripts\") pod \"mysqld-exporter-170d-account-create-update-j789s\" (UID: \"3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874\") " pod="openstack/mysqld-exporter-170d-account-create-update-j789s" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.572028 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bd6e86f-1ef7-4178-8beb-70eadfa60001-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-9x29j\" (UID: \"8bd6e86f-1ef7-4178-8beb-70eadfa60001\") " 
pod="openstack/mysqld-exporter-openstack-cell1-db-create-9x29j" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.595479 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qq5l\" (UniqueName: \"kubernetes.io/projected/8bd6e86f-1ef7-4178-8beb-70eadfa60001-kube-api-access-2qq5l\") pod \"mysqld-exporter-openstack-cell1-db-create-9x29j\" (UID: \"8bd6e86f-1ef7-4178-8beb-70eadfa60001\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-9x29j" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.636394 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-9x29j" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.673717 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874-operator-scripts\") pod \"mysqld-exporter-170d-account-create-update-j789s\" (UID: \"3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874\") " pod="openstack/mysqld-exporter-170d-account-create-update-j789s" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.674104 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qjwc\" (UniqueName: \"kubernetes.io/projected/3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874-kube-api-access-8qjwc\") pod \"mysqld-exporter-170d-account-create-update-j789s\" (UID: \"3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874\") " pod="openstack/mysqld-exporter-170d-account-create-update-j789s" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.674555 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874-operator-scripts\") pod \"mysqld-exporter-170d-account-create-update-j789s\" (UID: \"3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874\") " pod="openstack/mysqld-exporter-170d-account-create-update-j789s" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.695948 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qjwc\" (UniqueName: \"kubernetes.io/projected/3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874-kube-api-access-8qjwc\") pod \"mysqld-exporter-170d-account-create-update-j789s\" (UID: \"3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874\") " pod="openstack/mysqld-exporter-170d-account-create-update-j789s" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.771012 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-170d-account-create-update-j789s" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.784889 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.786283 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" event={"ID":"c229b29c-37d6-4d57-b983-9093c267bdef","Type":"ContainerDied","Data":"3170f843e13199b2e2b26bf34338b6708fa2bb92a6c4fb2aff50ce3d12a2bb56"} Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.786330 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3170f843e13199b2e2b26bf34338b6708fa2bb92a6c4fb2aff50ce3d12a2bb56" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.786734 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.795433 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" event={"ID":"7656e7ff-beb0-46db-9955-05cf273af8fc","Type":"ContainerDied","Data":"2e4ea47b63d4c53d797adc0805a75ea0640dc476fdb0a53ddfdc6325d1cb5f3f"} Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.795481 4693 scope.go:117] "RemoveContainer" containerID="67419a34f724b9da685b3af4f8ff16c9fc1fb0a6d782e3b60aaedfcb41902565" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.795540 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-9g4v7" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.877764 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs5gf\" (UniqueName: \"kubernetes.io/projected/7656e7ff-beb0-46db-9955-05cf273af8fc-kube-api-access-gs5gf\") pod \"7656e7ff-beb0-46db-9955-05cf273af8fc\" (UID: \"7656e7ff-beb0-46db-9955-05cf273af8fc\") " Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.877849 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-config\") pod \"7656e7ff-beb0-46db-9955-05cf273af8fc\" (UID: \"7656e7ff-beb0-46db-9955-05cf273af8fc\") " Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.877883 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-ovsdbserver-sb\") pod \"7656e7ff-beb0-46db-9955-05cf273af8fc\" (UID: \"7656e7ff-beb0-46db-9955-05cf273af8fc\") " Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.877903 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c229b29c-37d6-4d57-b983-9093c267bdef-dns-svc\") pod \"c229b29c-37d6-4d57-b983-9093c267bdef\" (UID: \"c229b29c-37d6-4d57-b983-9093c267bdef\") " Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.877920 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-dns-svc\") pod \"7656e7ff-beb0-46db-9955-05cf273af8fc\" (UID: \"7656e7ff-beb0-46db-9955-05cf273af8fc\") " Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.878344 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c229b29c-37d6-4d57-b983-9093c267bdef-config\") pod \"c229b29c-37d6-4d57-b983-9093c267bdef\" (UID: \"c229b29c-37d6-4d57-b983-9093c267bdef\") " Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.878372 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-498zp\" (UniqueName: \"kubernetes.io/projected/c229b29c-37d6-4d57-b983-9093c267bdef-kube-api-access-498zp\") pod \"c229b29c-37d6-4d57-b983-9093c267bdef\" (UID: \"c229b29c-37d6-4d57-b983-9093c267bdef\") " Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.886830 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7656e7ff-beb0-46db-9955-05cf273af8fc-kube-api-access-gs5gf" (OuterVolumeSpecName: "kube-api-access-gs5gf") pod "7656e7ff-beb0-46db-9955-05cf273af8fc" (UID: 
"7656e7ff-beb0-46db-9955-05cf273af8fc"). InnerVolumeSpecName "kube-api-access-gs5gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.899797 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c229b29c-37d6-4d57-b983-9093c267bdef-kube-api-access-498zp" (OuterVolumeSpecName: "kube-api-access-498zp") pod "c229b29c-37d6-4d57-b983-9093c267bdef" (UID: "c229b29c-37d6-4d57-b983-9093c267bdef"). InnerVolumeSpecName "kube-api-access-498zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.927200 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7656e7ff-beb0-46db-9955-05cf273af8fc" (UID: "7656e7ff-beb0-46db-9955-05cf273af8fc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.934819 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-config" (OuterVolumeSpecName: "config") pod "7656e7ff-beb0-46db-9955-05cf273af8fc" (UID: "7656e7ff-beb0-46db-9955-05cf273af8fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.940861 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7656e7ff-beb0-46db-9955-05cf273af8fc" (UID: "7656e7ff-beb0-46db-9955-05cf273af8fc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.967953 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c229b29c-37d6-4d57-b983-9093c267bdef-config" (OuterVolumeSpecName: "config") pod "c229b29c-37d6-4d57-b983-9093c267bdef" (UID: "c229b29c-37d6-4d57-b983-9093c267bdef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.972914 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c229b29c-37d6-4d57-b983-9093c267bdef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c229b29c-37d6-4d57-b983-9093c267bdef" (UID: "c229b29c-37d6-4d57-b983-9093c267bdef"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.982586 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c229b29c-37d6-4d57-b983-9093c267bdef-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.982613 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-498zp\" (UniqueName: \"kubernetes.io/projected/c229b29c-37d6-4d57-b983-9093c267bdef-kube-api-access-498zp\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.982629 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs5gf\" (UniqueName: \"kubernetes.io/projected/7656e7ff-beb0-46db-9955-05cf273af8fc-kube-api-access-gs5gf\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.982643 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.982656 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.982665 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c229b29c-37d6-4d57-b983-9093c267bdef-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:13 crc kubenswrapper[4693]: I1212 16:11:13.982673 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7656e7ff-beb0-46db-9955-05cf273af8fc-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.210958 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d7d4f8db7-5gfv7_c0e7e551-57f3-4891-972f-4532f2fd50c4/console/0.log" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.211336 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.231113 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-9g4v7"] Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.246178 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-9g4v7"] Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.303670 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsmlg\" (UniqueName: \"kubernetes.io/projected/c0e7e551-57f3-4891-972f-4532f2fd50c4-kube-api-access-bsmlg\") pod \"c0e7e551-57f3-4891-972f-4532f2fd50c4\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.304045 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-serving-cert\") pod \"c0e7e551-57f3-4891-972f-4532f2fd50c4\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.304081 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-oauth-serving-cert\") pod \"c0e7e551-57f3-4891-972f-4532f2fd50c4\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.304105 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-trusted-ca-bundle\") pod \"c0e7e551-57f3-4891-972f-4532f2fd50c4\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.304142 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-service-ca\") pod \"c0e7e551-57f3-4891-972f-4532f2fd50c4\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.305836 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c0e7e551-57f3-4891-972f-4532f2fd50c4" (UID: "c0e7e551-57f3-4891-972f-4532f2fd50c4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.306294 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c0e7e551-57f3-4891-972f-4532f2fd50c4" (UID: "c0e7e551-57f3-4891-972f-4532f2fd50c4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.308143 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-service-ca" (OuterVolumeSpecName: "service-ca") pod "c0e7e551-57f3-4891-972f-4532f2fd50c4" (UID: "c0e7e551-57f3-4891-972f-4532f2fd50c4"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.321399 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c0e7e551-57f3-4891-972f-4532f2fd50c4" (UID: "c0e7e551-57f3-4891-972f-4532f2fd50c4"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.322919 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e7e551-57f3-4891-972f-4532f2fd50c4-kube-api-access-bsmlg" (OuterVolumeSpecName: "kube-api-access-bsmlg") pod "c0e7e551-57f3-4891-972f-4532f2fd50c4" (UID: "c0e7e551-57f3-4891-972f-4532f2fd50c4"). InnerVolumeSpecName "kube-api-access-bsmlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.344244 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mbsvw"] Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.405807 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-oauth-config\") pod \"c0e7e551-57f3-4891-972f-4532f2fd50c4\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.405912 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-config\") pod \"c0e7e551-57f3-4891-972f-4532f2fd50c4\" (UID: \"c0e7e551-57f3-4891-972f-4532f2fd50c4\") " Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.407617 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-config" (OuterVolumeSpecName: "console-config") pod "c0e7e551-57f3-4891-972f-4532f2fd50c4" (UID: "c0e7e551-57f3-4891-972f-4532f2fd50c4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.410167 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c0e7e551-57f3-4891-972f-4532f2fd50c4" (UID: "c0e7e551-57f3-4891-972f-4532f2fd50c4"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.411569 4693 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.411597 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsmlg\" (UniqueName: \"kubernetes.io/projected/c0e7e551-57f3-4891-972f-4532f2fd50c4-kube-api-access-bsmlg\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.411612 4693 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.411625 4693 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.411637 4693 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.411649 4693 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0e7e551-57f3-4891-972f-4532f2fd50c4-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.513251 4693 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0e7e551-57f3-4891-972f-4532f2fd50c4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.830620 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ee38-account-create-update-7hnjl"] Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.839108 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3fe5a970-1de1-4166-815b-81097dfe20ce","Type":"ContainerStarted","Data":"324679e43234b942e5d24cd9684a27d2ac0d2a121c40f883aa2c7d20f4b41c76"} Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.841326 4693 generic.go:334] "Generic (PLEG): container finished" podID="52f33022-5f32-4ca6-bc92-5753d41cd038" containerID="b7553a4dec27c71fbcd14aebfe5f357a4f40de90656c086582e6466a9a660ac0" exitCode=0 Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.841390 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7z76l" event={"ID":"52f33022-5f32-4ca6-bc92-5753d41cd038","Type":"ContainerDied","Data":"b7553a4dec27c71fbcd14aebfe5f357a4f40de90656c086582e6466a9a660ac0"} Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.842576 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74dd-account-create-update-5h6zk"] Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.845902 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mbsvw" event={"ID":"be946e65-78d9-4dd1-9a59-a290c6ae0f76","Type":"ContainerStarted","Data":"186ff25a114ed5f3e946aa661cbe45b977812bc8d0f3705735e11c1a465d8725"} Dec 12 16:11:14 crc 
kubenswrapper[4693]: I1212 16:11:14.850952 4693 generic.go:334] "Generic (PLEG): container finished" podID="d45363e2-3684-4fc6-b322-d99e6e87d3fd" containerID="d9cff3e0e49928af617fc29c4e6db7c48889e3152f393792640f1c5d6d777637" exitCode=0 Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.851032 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"d45363e2-3684-4fc6-b322-d99e6e87d3fd","Type":"ContainerDied","Data":"d9cff3e0e49928af617fc29c4e6db7c48889e3152f393792640f1c5d6d777637"} Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.851329 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kwhwr"] Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.855958 4693 generic.go:334] "Generic (PLEG): container finished" podID="2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" containerID="6263a31351a8b66e00866ea115ef46512c88016a614fa3e3c0c54fd7cd8bc30a" exitCode=0 Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.856023 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db","Type":"ContainerDied","Data":"6263a31351a8b66e00866ea115ef46512c88016a614fa3e3c0c54fd7cd8bc30a"} Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.857946 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d7d4f8db7-5gfv7_c0e7e551-57f3-4891-972f-4532f2fd50c4/console/0.log" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.858033 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d7d4f8db7-5gfv7" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.859600 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d7d4f8db7-5gfv7" event={"ID":"c0e7e551-57f3-4891-972f-4532f2fd50c4","Type":"ContainerDied","Data":"dfa5f8533d6aa2d2d578cab261b1cac64452a8f7fd40ddffd23e1f945f613caa"} Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.859649 4693 scope.go:117] "RemoveContainer" containerID="b3b2d924529d3f214244ec426a386be6f5a6d98e91c3ab05a3b25ed525025ffc" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.884693 4693 generic.go:334] "Generic (PLEG): container finished" podID="6fd6556d-68c5-4492-804c-bc3188ab39b7" containerID="3be89743cd21d75b5646c0e66954be38785af6de6dae899ed1446b1366deecb8" exitCode=0 Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.884790 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6fd6556d-68c5-4492-804c-bc3188ab39b7","Type":"ContainerDied","Data":"3be89743cd21d75b5646c0e66954be38785af6de6dae899ed1446b1366deecb8"} Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.897461 4693 generic.go:334] "Generic (PLEG): container finished" podID="62a37a53-6f53-4b51-b493-edfdb42c3a93" containerID="22692ec927135356ca36a5a4cb2f6741bb3735d358eacd1c581a9edc0bb9e830" exitCode=0 Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.897554 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"62a37a53-6f53-4b51-b493-edfdb42c3a93","Type":"ContainerDied","Data":"22692ec927135356ca36a5a4cb2f6741bb3735d358eacd1c581a9edc0bb9e830"} Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.907879 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-j9m7j" 
event={"ID":"9bcffb4b-6142-4b31-8403-6b00de7fd93a","Type":"ContainerStarted","Data":"36429260ab50c43ff84bbd99e50853ce7f2124f8e89885dfec52e69653464c85"} Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.910980 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.911080 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-b8xvf" event={"ID":"3e230bf0-bb21-469f-ad05-1d061026d73f","Type":"ContainerStarted","Data":"d43f48af349e60a42b39537a1595f0ef8ccfe321d9ef68c51cad34e7fa908c97"} Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.916735 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-mbsvw" podStartSLOduration=5.916710205 podStartE2EDuration="5.916710205s" podCreationTimestamp="2025-12-12 16:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:11:14.909161422 +0000 UTC m=+1502.077801013" watchObservedRunningTime="2025-12-12 16:11:14.916710205 +0000 UTC m=+1502.085349816" Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.965449 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d7d4f8db7-5gfv7"] Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.972347 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5d7d4f8db7-5gfv7"] Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.981472 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-2thk6"] Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.989305 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-2thk6"] Dec 12 16:11:14 crc kubenswrapper[4693]: I1212 16:11:14.997283 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-j9m7j" podStartSLOduration=11.997253081 podStartE2EDuration="11.997253081s" podCreationTimestamp="2025-12-12 16:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:11:14.976905844 +0000 UTC m=+1502.145545435" watchObservedRunningTime="2025-12-12 16:11:14.997253081 +0000 UTC m=+1502.165892682" Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.044710 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-b8xvf" podStartSLOduration=4.722948645 podStartE2EDuration="17.044692287s" podCreationTimestamp="2025-12-12 16:10:58 +0000 UTC" firstStartedPulling="2025-12-12 16:11:01.494631354 +0000 UTC m=+1488.663270955" lastFinishedPulling="2025-12-12 16:11:13.816374996 +0000 UTC m=+1500.985014597" observedRunningTime="2025-12-12 16:11:15.01766756 +0000 UTC m=+1502.186307171" watchObservedRunningTime="2025-12-12 16:11:15.044692287 +0000 UTC m=+1502.213331878" Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.141555 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-9x29j"] Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.168121 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nrjn8"] Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.184113 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-3cdf-account-create-update-5dd7p"] Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.197908 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-170d-account-create-update-j789s"] Dec 12 16:11:15 crc kubenswrapper[4693]: W1212 16:11:15.237243 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bce3ab0_98cb_4d62_aee5_d2cdcfdd9874.slice/crio-6ab6bb9a8f44de0e2dd0fb7bc1f41946244bd0a0fe6f96cfcd0d6945400277bf WatchSource:0}: Error finding container 6ab6bb9a8f44de0e2dd0fb7bc1f41946244bd0a0fe6f96cfcd0d6945400277bf: Status 404 returned error can't find the container with id 6ab6bb9a8f44de0e2dd0fb7bc1f41946244bd0a0fe6f96cfcd0d6945400277bf Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.246167 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.247855 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.248016 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.385457 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7656e7ff-beb0-46db-9955-05cf273af8fc" path="/var/lib/kubelet/pods/7656e7ff-beb0-46db-9955-05cf273af8fc/volumes" Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.386200 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0e7e551-57f3-4891-972f-4532f2fd50c4" path="/var/lib/kubelet/pods/c0e7e551-57f3-4891-972f-4532f2fd50c4/volumes" Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.386924 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c229b29c-37d6-4d57-b983-9093c267bdef" path="/var/lib/kubelet/pods/c229b29c-37d6-4d57-b983-9093c267bdef/volumes" Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.926695 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"d45363e2-3684-4fc6-b322-d99e6e87d3fd","Type":"ContainerStarted","Data":"e6cbcaca1b97d6adb98104d1522fb120a42fa7091ecaa1bf3915235161f4125f"} Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.928722 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.930358 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nrjn8" event={"ID":"3b334154-7111-4b39-b0fc-ffb79a331506","Type":"ContainerStarted","Data":"93930e3e6e66b5c066136eed9102052e2949ddc09b3afbf9c8a95d1a15ad0121"} Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.944394 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-170d-account-create-update-j789s" event={"ID":"3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874","Type":"ContainerStarted","Data":"6ab6bb9a8f44de0e2dd0fb7bc1f41946244bd0a0fe6f96cfcd0d6945400277bf"} Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.960246 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ee38-account-create-update-7hnjl" event={"ID":"00d3b68b-c9df-4c00-ae1f-079a98130251","Type":"ContainerStarted","Data":"474a35673195aeb1985e5f65caf335f98996b9159d53fa5acd8a61a53699f9a8"} Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.962936 4693 generic.go:334] 
"Generic (PLEG): container finished" podID="be946e65-78d9-4dd1-9a59-a290c6ae0f76" containerID="708e5cd1c3747dc8283fef87e0295162c90f581f11018a87fabcc27407436213" exitCode=0 Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.963000 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mbsvw" event={"ID":"be946e65-78d9-4dd1-9a59-a290c6ae0f76","Type":"ContainerDied","Data":"708e5cd1c3747dc8283fef87e0295162c90f581f11018a87fabcc27407436213"} Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.967056 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3cdf-account-create-update-5dd7p" event={"ID":"2244ed54-0d24-4961-b46b-eb6bf52ae2dc","Type":"ContainerStarted","Data":"139e365ed7fec59836157c4a45b8dcb0daa45913c35d7d400da93f464d0711b3"} Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.971235 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6fd6556d-68c5-4492-804c-bc3188ab39b7","Type":"ContainerStarted","Data":"d82e8161185290fbcb563b7b15a99b5b8f10e53b27ce24cf46414cb6c2f02d2f"} Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.971508 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.974903 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-9x29j" event={"ID":"8bd6e86f-1ef7-4178-8beb-70eadfa60001","Type":"ContainerStarted","Data":"dd9530fc0b68ddc9d09b3103aea0c444198de113f826163d42b11953b376ddd7"} Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.976417 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74dd-account-create-update-5h6zk" event={"ID":"0ea7a477-0eef-4e79-bd49-d4e152de7553","Type":"ContainerStarted","Data":"5a9049f5edc047726ff7e95c9315d7c91b8e629fcdf9a771658fb74a8872d66d"} Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.983892 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kwhwr" event={"ID":"ba5f3ece-76c3-42f5-ae1f-b47727c5217c","Type":"ContainerStarted","Data":"3c343840b4d4edbc099d049e17c3504d5f96c7110c5f44e969e612848817a152"} Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.983941 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kwhwr" event={"ID":"ba5f3ece-76c3-42f5-ae1f-b47727c5217c","Type":"ContainerStarted","Data":"0795727c30d12e04a3403f2b9135942b514d6950e0be4bf56cf8a27f8c7da89d"} Dec 12 16:11:15 crc kubenswrapper[4693]: I1212 16:11:15.990299 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=38.997372072 podStartE2EDuration="1m0.990257962s" podCreationTimestamp="2025-12-12 16:10:15 +0000 UTC" firstStartedPulling="2025-12-12 16:10:18.180464788 +0000 UTC m=+1445.349104389" lastFinishedPulling="2025-12-12 16:10:40.173350678 +0000 UTC m=+1467.341990279" observedRunningTime="2025-12-12 16:11:15.95299459 +0000 UTC m=+1503.121634201" watchObservedRunningTime="2025-12-12 16:11:15.990257962 +0000 UTC m=+1503.158897563" Dec 12 16:11:16 crc kubenswrapper[4693]: I1212 16:11:16.012423 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.288504553 podStartE2EDuration="1m0.012400868s" podCreationTimestamp="2025-12-12 16:10:16 +0000 UTC" firstStartedPulling="2025-12-12 16:10:18.407397622 +0000 UTC 
m=+1445.576037223" lastFinishedPulling="2025-12-12 16:10:40.131293937 +0000 UTC m=+1467.299933538" observedRunningTime="2025-12-12 16:11:16.009157881 +0000 UTC m=+1503.177797482" watchObservedRunningTime="2025-12-12 16:11:16.012400868 +0000 UTC m=+1503.181040469" Dec 12 16:11:16 crc kubenswrapper[4693]: I1212 16:11:16.030354 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-kwhwr" podStartSLOduration=6.03033225 podStartE2EDuration="6.03033225s" podCreationTimestamp="2025-12-12 16:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:11:16.025783808 +0000 UTC m=+1503.194423409" watchObservedRunningTime="2025-12-12 16:11:16.03033225 +0000 UTC m=+1503.198971851" Dec 12 16:11:16 crc kubenswrapper[4693]: I1212 16:11:16.991101 4693 generic.go:334] "Generic (PLEG): container finished" podID="8bd6e86f-1ef7-4178-8beb-70eadfa60001" containerID="abfcee745810706e3cc12fe27567f4011954ef9e61c60de2a02056eda83ea1f3" exitCode=0 Dec 12 16:11:16 crc kubenswrapper[4693]: I1212 16:11:16.991194 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-9x29j" event={"ID":"8bd6e86f-1ef7-4178-8beb-70eadfa60001","Type":"ContainerDied","Data":"abfcee745810706e3cc12fe27567f4011954ef9e61c60de2a02056eda83ea1f3"} Dec 12 16:11:16 crc kubenswrapper[4693]: I1212 16:11:16.993424 4693 generic.go:334] "Generic (PLEG): container finished" podID="0ea7a477-0eef-4e79-bd49-d4e152de7553" containerID="a2251985d994295837cb66a8a40ee806370a223d478b57901912d82c8033f2e4" exitCode=0 Dec 12 16:11:16 crc kubenswrapper[4693]: I1212 16:11:16.993534 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74dd-account-create-update-5h6zk" event={"ID":"0ea7a477-0eef-4e79-bd49-d4e152de7553","Type":"ContainerDied","Data":"a2251985d994295837cb66a8a40ee806370a223d478b57901912d82c8033f2e4"} Dec 12 16:11:16 crc kubenswrapper[4693]: I1212 16:11:16.995068 4693 generic.go:334] "Generic (PLEG): container finished" podID="2244ed54-0d24-4961-b46b-eb6bf52ae2dc" containerID="a0dd93bb76506bb93443932e086984f099e6e2fc799fe8b96feca58b382498f4" exitCode=0 Dec 12 16:11:16 crc kubenswrapper[4693]: I1212 16:11:16.995161 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3cdf-account-create-update-5dd7p" event={"ID":"2244ed54-0d24-4961-b46b-eb6bf52ae2dc","Type":"ContainerDied","Data":"a0dd93bb76506bb93443932e086984f099e6e2fc799fe8b96feca58b382498f4"} Dec 12 16:11:16 crc kubenswrapper[4693]: I1212 16:11:16.996562 4693 generic.go:334] "Generic (PLEG): container finished" podID="ba5f3ece-76c3-42f5-ae1f-b47727c5217c" containerID="3c343840b4d4edbc099d049e17c3504d5f96c7110c5f44e969e612848817a152" exitCode=0 Dec 12 16:11:16 crc kubenswrapper[4693]: I1212 16:11:16.996707 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kwhwr" event={"ID":"ba5f3ece-76c3-42f5-ae1f-b47727c5217c","Type":"ContainerDied","Data":"3c343840b4d4edbc099d049e17c3504d5f96c7110c5f44e969e612848817a152"} Dec 12 16:11:17 crc kubenswrapper[4693]: I1212 16:11:17.002729 4693 generic.go:334] "Generic (PLEG): container finished" podID="3b334154-7111-4b39-b0fc-ffb79a331506" containerID="bd14cc2089b15c29957b5209dba05a6d95edfc3415090bea0378f3bca81ba164" exitCode=0 Dec 12 16:11:17 crc kubenswrapper[4693]: I1212 16:11:17.002875 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nrjn8" 
event={"ID":"3b334154-7111-4b39-b0fc-ffb79a331506","Type":"ContainerDied","Data":"bd14cc2089b15c29957b5209dba05a6d95edfc3415090bea0378f3bca81ba164"} Dec 12 16:11:17 crc kubenswrapper[4693]: I1212 16:11:17.761136 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mbsvw" Dec 12 16:11:17 crc kubenswrapper[4693]: I1212 16:11:17.859171 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ghwj\" (UniqueName: \"kubernetes.io/projected/be946e65-78d9-4dd1-9a59-a290c6ae0f76-kube-api-access-7ghwj\") pod \"be946e65-78d9-4dd1-9a59-a290c6ae0f76\" (UID: \"be946e65-78d9-4dd1-9a59-a290c6ae0f76\") " Dec 12 16:11:17 crc kubenswrapper[4693]: I1212 16:11:17.859854 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be946e65-78d9-4dd1-9a59-a290c6ae0f76-operator-scripts\") pod \"be946e65-78d9-4dd1-9a59-a290c6ae0f76\" (UID: \"be946e65-78d9-4dd1-9a59-a290c6ae0f76\") " Dec 12 16:11:17 crc kubenswrapper[4693]: I1212 16:11:17.860760 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be946e65-78d9-4dd1-9a59-a290c6ae0f76-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be946e65-78d9-4dd1-9a59-a290c6ae0f76" (UID: "be946e65-78d9-4dd1-9a59-a290c6ae0f76"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:17 crc kubenswrapper[4693]: I1212 16:11:17.869708 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be946e65-78d9-4dd1-9a59-a290c6ae0f76-kube-api-access-7ghwj" (OuterVolumeSpecName: "kube-api-access-7ghwj") pod "be946e65-78d9-4dd1-9a59-a290c6ae0f76" (UID: "be946e65-78d9-4dd1-9a59-a290c6ae0f76"). InnerVolumeSpecName "kube-api-access-7ghwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:17 crc kubenswrapper[4693]: I1212 16:11:17.961596 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be946e65-78d9-4dd1-9a59-a290c6ae0f76-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:17 crc kubenswrapper[4693]: I1212 16:11:17.961629 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ghwj\" (UniqueName: \"kubernetes.io/projected/be946e65-78d9-4dd1-9a59-a290c6ae0f76-kube-api-access-7ghwj\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.014111 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3fe5a970-1de1-4166-815b-81097dfe20ce","Type":"ContainerStarted","Data":"21f42779038eb14690b3f96c1c10191dc464d901e3c445bc82fac04b3ac04ad7"} Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.015439 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ee38-account-create-update-7hnjl" event={"ID":"00d3b68b-c9df-4c00-ae1f-079a98130251","Type":"ContainerStarted","Data":"7f54dcdf921ba41fe1e8076b67643ad5f4e0b6158063a98398a4bb48a12e5889"} Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.020228 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"215ca5de-ce9f-4370-8aff-715dd1e384a3","Type":"ContainerStarted","Data":"688ecbace3de99fc0a9b6780caca106c44a29986d66c67f6bc356765ca702c56"} Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.020287 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"215ca5de-ce9f-4370-8aff-715dd1e384a3","Type":"ContainerStarted","Data":"b105c9ded490e7c88edc8ba7c3c4f5377050968b39ed6a553bf1ba3bce9814b9"} Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.020400 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.021902 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7z76l" event={"ID":"52f33022-5f32-4ca6-bc92-5753d41cd038","Type":"ContainerStarted","Data":"f8e2be273ad420bdb52a1c546cfb1d5a5358b756b210334bdfd4cff0c1e75dd1"} Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.022086 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.023695 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mbsvw" event={"ID":"be946e65-78d9-4dd1-9a59-a290c6ae0f76","Type":"ContainerDied","Data":"186ff25a114ed5f3e946aa661cbe45b977812bc8d0f3705735e11c1a465d8725"} Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.023735 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="186ff25a114ed5f3e946aa661cbe45b977812bc8d0f3705735e11c1a465d8725" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.023708 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mbsvw" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.025586 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"62a37a53-6f53-4b51-b493-edfdb42c3a93","Type":"ContainerStarted","Data":"1b413ccc7a0fa4d9e605786f276abb135dc8197b612a15236191c260017d896a"} Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.025825 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.029928 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db","Type":"ContainerStarted","Data":"faad30d19c72dd4da12844f0eb713da00223d8434d0e7eb5e4614acca5061063"} Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.030694 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.032882 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-170d-account-create-update-j789s" event={"ID":"3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874","Type":"ContainerStarted","Data":"ddfa940b4d974a7a9ee4247c01d6cdb79a242a43b2b1e299063d181f4328b89d"} Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.053163 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-ee38-account-create-update-7hnjl" podStartSLOduration=8.053140584 podStartE2EDuration="8.053140584s" podCreationTimestamp="2025-12-12 16:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:11:18.048615052 +0000 UTC m=+1505.217254673" watchObservedRunningTime="2025-12-12 16:11:18.053140584 +0000 UTC m=+1505.221780185" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.075233 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.670863778 podStartE2EDuration="1m3.075215557s" podCreationTimestamp="2025-12-12 16:10:15 +0000 UTC" firstStartedPulling="2025-12-12 16:10:18.576973373 +0000 UTC m=+1445.745612974" lastFinishedPulling="2025-12-12 16:10:39.981325152 +0000 UTC m=+1467.149964753" observedRunningTime="2025-12-12 16:11:18.073039089 +0000 UTC m=+1505.241678690" watchObservedRunningTime="2025-12-12 16:11:18.075215557 +0000 UTC m=+1505.243855158" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.096527 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-170d-account-create-update-j789s" podStartSLOduration=5.0965075 podStartE2EDuration="5.0965075s" podCreationTimestamp="2025-12-12 16:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:11:18.092632666 +0000 UTC m=+1505.261272267" watchObservedRunningTime="2025-12-12 16:11:18.0965075 +0000 UTC m=+1505.265147101" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.160202 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=9.521574145 podStartE2EDuration="15.160181153s" podCreationTimestamp="2025-12-12 16:11:03 +0000 UTC" firstStartedPulling="2025-12-12 16:11:09.691348173 +0000 UTC m=+1496.859987764" lastFinishedPulling="2025-12-12 16:11:15.329955171 +0000 UTC 
m=+1502.498594772" observedRunningTime="2025-12-12 16:11:18.157931032 +0000 UTC m=+1505.326570633" watchObservedRunningTime="2025-12-12 16:11:18.160181153 +0000 UTC m=+1505.328820754" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.239849 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-7z76l" podStartSLOduration=15.239827065 podStartE2EDuration="15.239827065s" podCreationTimestamp="2025-12-12 16:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:11:18.20467908 +0000 UTC m=+1505.373318701" watchObservedRunningTime="2025-12-12 16:11:18.239827065 +0000 UTC m=+1505.408466666" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.252474 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=41.361309462 podStartE2EDuration="1m3.252456575s" podCreationTimestamp="2025-12-12 16:10:15 +0000 UTC" firstStartedPulling="2025-12-12 16:10:18.262358611 +0000 UTC m=+1445.430998212" lastFinishedPulling="2025-12-12 16:10:40.153505724 +0000 UTC m=+1467.322145325" observedRunningTime="2025-12-12 16:11:18.245626921 +0000 UTC m=+1505.414266512" watchObservedRunningTime="2025-12-12 16:11:18.252456575 +0000 UTC m=+1505.421096176" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.407582 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7cb5889db5-2thk6" podUID="c229b29c-37d6-4d57-b983-9093c267bdef" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.471921 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kwhwr" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.585079 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba5f3ece-76c3-42f5-ae1f-b47727c5217c-operator-scripts\") pod \"ba5f3ece-76c3-42f5-ae1f-b47727c5217c\" (UID: \"ba5f3ece-76c3-42f5-ae1f-b47727c5217c\") " Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.585424 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwrq8\" (UniqueName: \"kubernetes.io/projected/ba5f3ece-76c3-42f5-ae1f-b47727c5217c-kube-api-access-nwrq8\") pod \"ba5f3ece-76c3-42f5-ae1f-b47727c5217c\" (UID: \"ba5f3ece-76c3-42f5-ae1f-b47727c5217c\") " Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.585828 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba5f3ece-76c3-42f5-ae1f-b47727c5217c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba5f3ece-76c3-42f5-ae1f-b47727c5217c" (UID: "ba5f3ece-76c3-42f5-ae1f-b47727c5217c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.586148 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba5f3ece-76c3-42f5-ae1f-b47727c5217c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.590965 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba5f3ece-76c3-42f5-ae1f-b47727c5217c-kube-api-access-nwrq8" (OuterVolumeSpecName: "kube-api-access-nwrq8") pod "ba5f3ece-76c3-42f5-ae1f-b47727c5217c" (UID: "ba5f3ece-76c3-42f5-ae1f-b47727c5217c"). InnerVolumeSpecName "kube-api-access-nwrq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.688657 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwrq8\" (UniqueName: \"kubernetes.io/projected/ba5f3ece-76c3-42f5-ae1f-b47727c5217c-kube-api-access-nwrq8\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.931907 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nrjn8" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.938613 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3cdf-account-create-update-5dd7p" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.945838 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-9x29j" Dec 12 16:11:18 crc kubenswrapper[4693]: I1212 16:11:18.952309 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74dd-account-create-update-5h6zk" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.083897 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nrjn8" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.083929 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nrjn8" event={"ID":"3b334154-7111-4b39-b0fc-ffb79a331506","Type":"ContainerDied","Data":"93930e3e6e66b5c066136eed9102052e2949ddc09b3afbf9c8a95d1a15ad0121"} Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.083969 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93930e3e6e66b5c066136eed9102052e2949ddc09b3afbf9c8a95d1a15ad0121" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.085526 4693 generic.go:334] "Generic (PLEG): container finished" podID="3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874" containerID="ddfa940b4d974a7a9ee4247c01d6cdb79a242a43b2b1e299063d181f4328b89d" exitCode=0 Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.085593 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-170d-account-create-update-j789s" event={"ID":"3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874","Type":"ContainerDied","Data":"ddfa940b4d974a7a9ee4247c01d6cdb79a242a43b2b1e299063d181f4328b89d"} Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.090705 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-9x29j" event={"ID":"8bd6e86f-1ef7-4178-8beb-70eadfa60001","Type":"ContainerDied","Data":"dd9530fc0b68ddc9d09b3103aea0c444198de113f826163d42b11953b376ddd7"} Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.090750 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd9530fc0b68ddc9d09b3103aea0c444198de113f826163d42b11953b376ddd7" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.090807 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-9x29j" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.103692 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-74dd-account-create-update-5h6zk" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.104162 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74dd-account-create-update-5h6zk" event={"ID":"0ea7a477-0eef-4e79-bd49-d4e152de7553","Type":"ContainerDied","Data":"5a9049f5edc047726ff7e95c9315d7c91b8e629fcdf9a771658fb74a8872d66d"} Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.104190 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a9049f5edc047726ff7e95c9315d7c91b8e629fcdf9a771658fb74a8872d66d" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.108679 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2244ed54-0d24-4961-b46b-eb6bf52ae2dc-operator-scripts\") pod \"2244ed54-0d24-4961-b46b-eb6bf52ae2dc\" (UID: \"2244ed54-0d24-4961-b46b-eb6bf52ae2dc\") " Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.108756 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj67d\" (UniqueName: \"kubernetes.io/projected/3b334154-7111-4b39-b0fc-ffb79a331506-kube-api-access-tj67d\") pod \"3b334154-7111-4b39-b0fc-ffb79a331506\" (UID: \"3b334154-7111-4b39-b0fc-ffb79a331506\") " Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.108861 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b334154-7111-4b39-b0fc-ffb79a331506-operator-scripts\") pod \"3b334154-7111-4b39-b0fc-ffb79a331506\" (UID: \"3b334154-7111-4b39-b0fc-ffb79a331506\") " Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.108893 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qq5l\" (UniqueName: \"kubernetes.io/projected/8bd6e86f-1ef7-4178-8beb-70eadfa60001-kube-api-access-2qq5l\") pod \"8bd6e86f-1ef7-4178-8beb-70eadfa60001\" (UID: \"8bd6e86f-1ef7-4178-8beb-70eadfa60001\") " Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.108922 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bd6e86f-1ef7-4178-8beb-70eadfa60001-operator-scripts\") pod \"8bd6e86f-1ef7-4178-8beb-70eadfa60001\" (UID: \"8bd6e86f-1ef7-4178-8beb-70eadfa60001\") " Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.108973 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7pkd\" (UniqueName: \"kubernetes.io/projected/2244ed54-0d24-4961-b46b-eb6bf52ae2dc-kube-api-access-z7pkd\") pod \"2244ed54-0d24-4961-b46b-eb6bf52ae2dc\" (UID: \"2244ed54-0d24-4961-b46b-eb6bf52ae2dc\") " Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.109046 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8tpd\" (UniqueName: \"kubernetes.io/projected/0ea7a477-0eef-4e79-bd49-d4e152de7553-kube-api-access-c8tpd\") pod \"0ea7a477-0eef-4e79-bd49-d4e152de7553\" (UID: \"0ea7a477-0eef-4e79-bd49-d4e152de7553\") " Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.109090 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea7a477-0eef-4e79-bd49-d4e152de7553-operator-scripts\") pod \"0ea7a477-0eef-4e79-bd49-d4e152de7553\" (UID: \"0ea7a477-0eef-4e79-bd49-d4e152de7553\") " Dec 12 16:11:19 crc 
kubenswrapper[4693]: I1212 16:11:19.109880 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ea7a477-0eef-4e79-bd49-d4e152de7553-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ea7a477-0eef-4e79-bd49-d4e152de7553" (UID: "0ea7a477-0eef-4e79-bd49-d4e152de7553"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.110203 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bd6e86f-1ef7-4178-8beb-70eadfa60001-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8bd6e86f-1ef7-4178-8beb-70eadfa60001" (UID: "8bd6e86f-1ef7-4178-8beb-70eadfa60001"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.112322 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2244ed54-0d24-4961-b46b-eb6bf52ae2dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2244ed54-0d24-4961-b46b-eb6bf52ae2dc" (UID: "2244ed54-0d24-4961-b46b-eb6bf52ae2dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.113457 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b334154-7111-4b39-b0fc-ffb79a331506-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b334154-7111-4b39-b0fc-ffb79a331506" (UID: "3b334154-7111-4b39-b0fc-ffb79a331506"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.117234 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2244ed54-0d24-4961-b46b-eb6bf52ae2dc-kube-api-access-z7pkd" (OuterVolumeSpecName: "kube-api-access-z7pkd") pod "2244ed54-0d24-4961-b46b-eb6bf52ae2dc" (UID: "2244ed54-0d24-4961-b46b-eb6bf52ae2dc"). InnerVolumeSpecName "kube-api-access-z7pkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.119485 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bd6e86f-1ef7-4178-8beb-70eadfa60001-kube-api-access-2qq5l" (OuterVolumeSpecName: "kube-api-access-2qq5l") pod "8bd6e86f-1ef7-4178-8beb-70eadfa60001" (UID: "8bd6e86f-1ef7-4178-8beb-70eadfa60001"). InnerVolumeSpecName "kube-api-access-2qq5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.119552 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ea7a477-0eef-4e79-bd49-d4e152de7553-kube-api-access-c8tpd" (OuterVolumeSpecName: "kube-api-access-c8tpd") pod "0ea7a477-0eef-4e79-bd49-d4e152de7553" (UID: "0ea7a477-0eef-4e79-bd49-d4e152de7553"). InnerVolumeSpecName "kube-api-access-c8tpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.119608 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b334154-7111-4b39-b0fc-ffb79a331506-kube-api-access-tj67d" (OuterVolumeSpecName: "kube-api-access-tj67d") pod "3b334154-7111-4b39-b0fc-ffb79a331506" (UID: "3b334154-7111-4b39-b0fc-ffb79a331506"). 
InnerVolumeSpecName "kube-api-access-tj67d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.119886 4693 generic.go:334] "Generic (PLEG): container finished" podID="00d3b68b-c9df-4c00-ae1f-079a98130251" containerID="7f54dcdf921ba41fe1e8076b67643ad5f4e0b6158063a98398a4bb48a12e5889" exitCode=0 Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.120451 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ee38-account-create-update-7hnjl" event={"ID":"00d3b68b-c9df-4c00-ae1f-079a98130251","Type":"ContainerDied","Data":"7f54dcdf921ba41fe1e8076b67643ad5f4e0b6158063a98398a4bb48a12e5889"} Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.128976 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3cdf-account-create-update-5dd7p" event={"ID":"2244ed54-0d24-4961-b46b-eb6bf52ae2dc","Type":"ContainerDied","Data":"139e365ed7fec59836157c4a45b8dcb0daa45913c35d7d400da93f464d0711b3"} Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.129022 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="139e365ed7fec59836157c4a45b8dcb0daa45913c35d7d400da93f464d0711b3" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.129096 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3cdf-account-create-update-5dd7p" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.141945 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kwhwr" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.141994 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kwhwr" event={"ID":"ba5f3ece-76c3-42f5-ae1f-b47727c5217c","Type":"ContainerDied","Data":"0795727c30d12e04a3403f2b9135942b514d6950e0be4bf56cf8a27f8c7da89d"} Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.142070 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0795727c30d12e04a3403f2b9135942b514d6950e0be4bf56cf8a27f8c7da89d" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.211248 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2244ed54-0d24-4961-b46b-eb6bf52ae2dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.211298 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj67d\" (UniqueName: \"kubernetes.io/projected/3b334154-7111-4b39-b0fc-ffb79a331506-kube-api-access-tj67d\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.211310 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b334154-7111-4b39-b0fc-ffb79a331506-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.211318 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qq5l\" (UniqueName: \"kubernetes.io/projected/8bd6e86f-1ef7-4178-8beb-70eadfa60001-kube-api-access-2qq5l\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.211329 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bd6e86f-1ef7-4178-8beb-70eadfa60001-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:19 crc kubenswrapper[4693]: 
I1212 16:11:19.211337 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7pkd\" (UniqueName: \"kubernetes.io/projected/2244ed54-0d24-4961-b46b-eb6bf52ae2dc-kube-api-access-z7pkd\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.211346 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8tpd\" (UniqueName: \"kubernetes.io/projected/0ea7a477-0eef-4e79-bd49-d4e152de7553-kube-api-access-c8tpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:19 crc kubenswrapper[4693]: I1212 16:11:19.211355 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea7a477-0eef-4e79-bd49-d4e152de7553-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:20 crc kubenswrapper[4693]: I1212 16:11:20.530229 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-tzjth" podUID="636fcc75-4f63-4bf9-bcfe-8d0720896f25" containerName="ovn-controller" probeResult="failure" output=< Dec 12 16:11:20 crc kubenswrapper[4693]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 12 16:11:20 crc kubenswrapper[4693]: > Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.003354 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-r9p8t"] Dec 12 16:11:21 crc kubenswrapper[4693]: E1212 16:11:21.004105 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bd6e86f-1ef7-4178-8beb-70eadfa60001" containerName="mariadb-database-create" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004122 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bd6e86f-1ef7-4178-8beb-70eadfa60001" containerName="mariadb-database-create" Dec 12 16:11:21 crc kubenswrapper[4693]: E1212 16:11:21.004136 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b334154-7111-4b39-b0fc-ffb79a331506" containerName="mariadb-database-create" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004144 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b334154-7111-4b39-b0fc-ffb79a331506" containerName="mariadb-database-create" Dec 12 16:11:21 crc kubenswrapper[4693]: E1212 16:11:21.004157 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be946e65-78d9-4dd1-9a59-a290c6ae0f76" containerName="mariadb-database-create" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004163 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="be946e65-78d9-4dd1-9a59-a290c6ae0f76" containerName="mariadb-database-create" Dec 12 16:11:21 crc kubenswrapper[4693]: E1212 16:11:21.004174 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea7a477-0eef-4e79-bd49-d4e152de7553" containerName="mariadb-account-create-update" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004180 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea7a477-0eef-4e79-bd49-d4e152de7553" containerName="mariadb-account-create-update" Dec 12 16:11:21 crc kubenswrapper[4693]: E1212 16:11:21.004194 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7656e7ff-beb0-46db-9955-05cf273af8fc" containerName="init" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004200 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="7656e7ff-beb0-46db-9955-05cf273af8fc" containerName="init" Dec 12 16:11:21 crc kubenswrapper[4693]: E1212 16:11:21.004212 4693 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c229b29c-37d6-4d57-b983-9093c267bdef" containerName="init" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004219 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229b29c-37d6-4d57-b983-9093c267bdef" containerName="init" Dec 12 16:11:21 crc kubenswrapper[4693]: E1212 16:11:21.004233 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229b29c-37d6-4d57-b983-9093c267bdef" containerName="dnsmasq-dns" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004240 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229b29c-37d6-4d57-b983-9093c267bdef" containerName="dnsmasq-dns" Dec 12 16:11:21 crc kubenswrapper[4693]: E1212 16:11:21.004256 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2244ed54-0d24-4961-b46b-eb6bf52ae2dc" containerName="mariadb-account-create-update" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004264 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="2244ed54-0d24-4961-b46b-eb6bf52ae2dc" containerName="mariadb-account-create-update" Dec 12 16:11:21 crc kubenswrapper[4693]: E1212 16:11:21.004298 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5f3ece-76c3-42f5-ae1f-b47727c5217c" containerName="mariadb-database-create" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004308 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5f3ece-76c3-42f5-ae1f-b47727c5217c" containerName="mariadb-database-create" Dec 12 16:11:21 crc kubenswrapper[4693]: E1212 16:11:21.004329 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e7e551-57f3-4891-972f-4532f2fd50c4" containerName="console" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004338 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e7e551-57f3-4891-972f-4532f2fd50c4" containerName="console" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004522 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="7656e7ff-beb0-46db-9955-05cf273af8fc" containerName="init" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004538 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5f3ece-76c3-42f5-ae1f-b47727c5217c" containerName="mariadb-database-create" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004549 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea7a477-0eef-4e79-bd49-d4e152de7553" containerName="mariadb-account-create-update" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004559 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b334154-7111-4b39-b0fc-ffb79a331506" containerName="mariadb-database-create" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004568 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bd6e86f-1ef7-4178-8beb-70eadfa60001" containerName="mariadb-database-create" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004577 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="2244ed54-0d24-4961-b46b-eb6bf52ae2dc" containerName="mariadb-account-create-update" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004591 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="be946e65-78d9-4dd1-9a59-a290c6ae0f76" containerName="mariadb-database-create" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.004600 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229b29c-37d6-4d57-b983-9093c267bdef" containerName="dnsmasq-dns" Dec 12 16:11:21 crc 
kubenswrapper[4693]: I1212 16:11:21.004605 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e7e551-57f3-4891-972f-4532f2fd50c4" containerName="console" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.005411 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r9p8t" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.009211 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2hqqj" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.020449 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.026221 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-r9p8t"] Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.150286 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-db-sync-config-data\") pod \"glance-db-sync-r9p8t\" (UID: \"56fb0f10-fbce-4aed-9a10-7128021ce48f\") " pod="openstack/glance-db-sync-r9p8t" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.150343 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptpvk\" (UniqueName: \"kubernetes.io/projected/56fb0f10-fbce-4aed-9a10-7128021ce48f-kube-api-access-ptpvk\") pod \"glance-db-sync-r9p8t\" (UID: \"56fb0f10-fbce-4aed-9a10-7128021ce48f\") " pod="openstack/glance-db-sync-r9p8t" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.150363 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-combined-ca-bundle\") pod \"glance-db-sync-r9p8t\" (UID: \"56fb0f10-fbce-4aed-9a10-7128021ce48f\") " pod="openstack/glance-db-sync-r9p8t" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.150451 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-config-data\") pod \"glance-db-sync-r9p8t\" (UID: \"56fb0f10-fbce-4aed-9a10-7128021ce48f\") " pod="openstack/glance-db-sync-r9p8t" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.252622 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-db-sync-config-data\") pod \"glance-db-sync-r9p8t\" (UID: \"56fb0f10-fbce-4aed-9a10-7128021ce48f\") " pod="openstack/glance-db-sync-r9p8t" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.252683 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptpvk\" (UniqueName: \"kubernetes.io/projected/56fb0f10-fbce-4aed-9a10-7128021ce48f-kube-api-access-ptpvk\") pod \"glance-db-sync-r9p8t\" (UID: \"56fb0f10-fbce-4aed-9a10-7128021ce48f\") " pod="openstack/glance-db-sync-r9p8t" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.252709 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-combined-ca-bundle\") pod \"glance-db-sync-r9p8t\" (UID: \"56fb0f10-fbce-4aed-9a10-7128021ce48f\") " 
pod="openstack/glance-db-sync-r9p8t" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.252832 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-config-data\") pod \"glance-db-sync-r9p8t\" (UID: \"56fb0f10-fbce-4aed-9a10-7128021ce48f\") " pod="openstack/glance-db-sync-r9p8t" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.262400 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-db-sync-config-data\") pod \"glance-db-sync-r9p8t\" (UID: \"56fb0f10-fbce-4aed-9a10-7128021ce48f\") " pod="openstack/glance-db-sync-r9p8t" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.270667 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-combined-ca-bundle\") pod \"glance-db-sync-r9p8t\" (UID: \"56fb0f10-fbce-4aed-9a10-7128021ce48f\") " pod="openstack/glance-db-sync-r9p8t" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.271951 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-config-data\") pod \"glance-db-sync-r9p8t\" (UID: \"56fb0f10-fbce-4aed-9a10-7128021ce48f\") " pod="openstack/glance-db-sync-r9p8t" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.276467 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptpvk\" (UniqueName: \"kubernetes.io/projected/56fb0f10-fbce-4aed-9a10-7128021ce48f-kube-api-access-ptpvk\") pod \"glance-db-sync-r9p8t\" (UID: \"56fb0f10-fbce-4aed-9a10-7128021ce48f\") " pod="openstack/glance-db-sync-r9p8t" Dec 12 16:11:21 crc kubenswrapper[4693]: I1212 16:11:21.342209 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r9p8t" Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.138888 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ee38-account-create-update-7hnjl" Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.147643 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-170d-account-create-update-j789s" Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.196085 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ee38-account-create-update-7hnjl" Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.196200 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ee38-account-create-update-7hnjl" event={"ID":"00d3b68b-c9df-4c00-ae1f-079a98130251","Type":"ContainerDied","Data":"474a35673195aeb1985e5f65caf335f98996b9159d53fa5acd8a61a53699f9a8"} Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.196366 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="474a35673195aeb1985e5f65caf335f98996b9159d53fa5acd8a61a53699f9a8" Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.203901 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-170d-account-create-update-j789s" event={"ID":"3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874","Type":"ContainerDied","Data":"6ab6bb9a8f44de0e2dd0fb7bc1f41946244bd0a0fe6f96cfcd0d6945400277bf"} Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.203942 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ab6bb9a8f44de0e2dd0fb7bc1f41946244bd0a0fe6f96cfcd0d6945400277bf" Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.203943 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-170d-account-create-update-j789s" Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.281448 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qjwc\" (UniqueName: \"kubernetes.io/projected/3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874-kube-api-access-8qjwc\") pod \"3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874\" (UID: \"3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874\") " Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.281593 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00d3b68b-c9df-4c00-ae1f-079a98130251-operator-scripts\") pod \"00d3b68b-c9df-4c00-ae1f-079a98130251\" (UID: \"00d3b68b-c9df-4c00-ae1f-079a98130251\") " Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.282335 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d3b68b-c9df-4c00-ae1f-079a98130251-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00d3b68b-c9df-4c00-ae1f-079a98130251" (UID: "00d3b68b-c9df-4c00-ae1f-079a98130251"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.282524 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874-operator-scripts\") pod \"3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874\" (UID: \"3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874\") " Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.282571 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzsn7\" (UniqueName: \"kubernetes.io/projected/00d3b68b-c9df-4c00-ae1f-079a98130251-kube-api-access-fzsn7\") pod \"00d3b68b-c9df-4c00-ae1f-079a98130251\" (UID: \"00d3b68b-c9df-4c00-ae1f-079a98130251\") " Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.282939 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874" (UID: "3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.283664 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.283691 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00d3b68b-c9df-4c00-ae1f-079a98130251-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.301699 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874-kube-api-access-8qjwc" (OuterVolumeSpecName: "kube-api-access-8qjwc") pod "3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874" (UID: "3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874"). InnerVolumeSpecName "kube-api-access-8qjwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.303836 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d3b68b-c9df-4c00-ae1f-079a98130251-kube-api-access-fzsn7" (OuterVolumeSpecName: "kube-api-access-fzsn7") pod "00d3b68b-c9df-4c00-ae1f-079a98130251" (UID: "00d3b68b-c9df-4c00-ae1f-079a98130251"). InnerVolumeSpecName "kube-api-access-fzsn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.385790 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzsn7\" (UniqueName: \"kubernetes.io/projected/00d3b68b-c9df-4c00-ae1f-079a98130251-kube-api-access-fzsn7\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.385831 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qjwc\" (UniqueName: \"kubernetes.io/projected/3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874-kube-api-access-8qjwc\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:22 crc kubenswrapper[4693]: I1212 16:11:22.827521 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-r9p8t"] Dec 12 16:11:22 crc kubenswrapper[4693]: W1212 16:11:22.833654 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56fb0f10_fbce_4aed_9a10_7128021ce48f.slice/crio-a66336a94509a7ea8c4cf6b20a3a9396b4da458f90da3fa68e975e4781d37e0e WatchSource:0}: Error finding container a66336a94509a7ea8c4cf6b20a3a9396b4da458f90da3fa68e975e4781d37e0e: Status 404 returned error can't find the container with id a66336a94509a7ea8c4cf6b20a3a9396b4da458f90da3fa68e975e4781d37e0e Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.221084 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3fe5a970-1de1-4166-815b-81097dfe20ce","Type":"ContainerStarted","Data":"1c41d09fc575116b1ddb907fbc3208b21a8dbdfe93cf53a679d199cb01fb42a6"} Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.224396 4693 generic.go:334] "Generic (PLEG): container finished" podID="3e230bf0-bb21-469f-ad05-1d061026d73f" containerID="d43f48af349e60a42b39537a1595f0ef8ccfe321d9ef68c51cad34e7fa908c97" exitCode=0 Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.224444 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-b8xvf" event={"ID":"3e230bf0-bb21-469f-ad05-1d061026d73f","Type":"ContainerDied","Data":"d43f48af349e60a42b39537a1595f0ef8ccfe321d9ef68c51cad34e7fa908c97"} Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.225972 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r9p8t" event={"ID":"56fb0f10-fbce-4aed-9a10-7128021ce48f","Type":"ContainerStarted","Data":"a66336a94509a7ea8c4cf6b20a3a9396b4da458f90da3fa68e975e4781d37e0e"} Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.245032 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.621652606 podStartE2EDuration="1m1.245016973s" podCreationTimestamp="2025-12-12 16:10:22 +0000 UTC" firstStartedPulling="2025-12-12 16:10:40.692498743 +0000 UTC m=+1467.861138344" lastFinishedPulling="2025-12-12 16:11:22.31586311 +0000 UTC m=+1509.484502711" observedRunningTime="2025-12-12 16:11:23.244319074 +0000 UTC m=+1510.412958675" watchObservedRunningTime="2025-12-12 16:11:23.245016973 +0000 UTC m=+1510.413656574" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.681748 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Dec 12 16:11:23 crc kubenswrapper[4693]: E1212 16:11:23.682500 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874" containerName="mariadb-account-create-update" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 
16:11:23.682519 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874" containerName="mariadb-account-create-update" Dec 12 16:11:23 crc kubenswrapper[4693]: E1212 16:11:23.682534 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d3b68b-c9df-4c00-ae1f-079a98130251" containerName="mariadb-account-create-update" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.682542 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d3b68b-c9df-4c00-ae1f-079a98130251" containerName="mariadb-account-create-update" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.682760 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d3b68b-c9df-4c00-ae1f-079a98130251" containerName="mariadb-account-create-update" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.682793 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874" containerName="mariadb-account-create-update" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.683508 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.685578 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.709052 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.713662 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d90eb59-c661-4bec-ac19-87304c2c6f00-config-data\") pod \"mysqld-exporter-0\" (UID: \"3d90eb59-c661-4bec-ac19-87304c2c6f00\") " pod="openstack/mysqld-exporter-0" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.713772 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8r57\" (UniqueName: \"kubernetes.io/projected/3d90eb59-c661-4bec-ac19-87304c2c6f00-kube-api-access-d8r57\") pod \"mysqld-exporter-0\" (UID: \"3d90eb59-c661-4bec-ac19-87304c2c6f00\") " pod="openstack/mysqld-exporter-0" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.713911 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d90eb59-c661-4bec-ac19-87304c2c6f00-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"3d90eb59-c661-4bec-ac19-87304c2c6f00\") " pod="openstack/mysqld-exporter-0" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.815645 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d90eb59-c661-4bec-ac19-87304c2c6f00-config-data\") pod \"mysqld-exporter-0\" (UID: \"3d90eb59-c661-4bec-ac19-87304c2c6f00\") " pod="openstack/mysqld-exporter-0" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.816645 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8r57\" (UniqueName: \"kubernetes.io/projected/3d90eb59-c661-4bec-ac19-87304c2c6f00-kube-api-access-d8r57\") pod \"mysqld-exporter-0\" (UID: \"3d90eb59-c661-4bec-ac19-87304c2c6f00\") " pod="openstack/mysqld-exporter-0" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.816779 4693 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d90eb59-c661-4bec-ac19-87304c2c6f00-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"3d90eb59-c661-4bec-ac19-87304c2c6f00\") " pod="openstack/mysqld-exporter-0" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.834489 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8r57\" (UniqueName: \"kubernetes.io/projected/3d90eb59-c661-4bec-ac19-87304c2c6f00-kube-api-access-d8r57\") pod \"mysqld-exporter-0\" (UID: \"3d90eb59-c661-4bec-ac19-87304c2c6f00\") " pod="openstack/mysqld-exporter-0" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.835589 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d90eb59-c661-4bec-ac19-87304c2c6f00-config-data\") pod \"mysqld-exporter-0\" (UID: \"3d90eb59-c661-4bec-ac19-87304c2c6f00\") " pod="openstack/mysqld-exporter-0" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.851055 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d90eb59-c661-4bec-ac19-87304c2c6f00-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"3d90eb59-c661-4bec-ac19-87304c2c6f00\") " pod="openstack/mysqld-exporter-0" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.855482 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.953010 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6cwfg"] Dec 12 16:11:23 crc kubenswrapper[4693]: I1212 16:11:23.953253 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-6cwfg" podUID="b5827a44-9073-412a-90ec-653b5ac3f5fd" containerName="dnsmasq-dns" containerID="cri-o://fecdd48ca99acb8b9346023cbbbfcb64889582cae9090ca80a85da8870bf3172" gracePeriod=10 Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.040463 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.278982 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.279606 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.303768 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.339577 4693 generic.go:334] "Generic (PLEG): container finished" podID="b5827a44-9073-412a-90ec-653b5ac3f5fd" containerID="fecdd48ca99acb8b9346023cbbbfcb64889582cae9090ca80a85da8870bf3172" exitCode=0 Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.339703 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6cwfg" event={"ID":"b5827a44-9073-412a-90ec-653b5ac3f5fd","Type":"ContainerDied","Data":"fecdd48ca99acb8b9346023cbbbfcb64889582cae9090ca80a85da8870bf3172"} Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.345700 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.726723 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6cwfg" Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.815458 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.842536 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5827a44-9073-412a-90ec-653b5ac3f5fd-dns-svc\") pod \"b5827a44-9073-412a-90ec-653b5ac3f5fd\" (UID: \"b5827a44-9073-412a-90ec-653b5ac3f5fd\") " Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.842691 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5827a44-9073-412a-90ec-653b5ac3f5fd-config\") pod \"b5827a44-9073-412a-90ec-653b5ac3f5fd\" (UID: \"b5827a44-9073-412a-90ec-653b5ac3f5fd\") " Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.842748 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ps9j\" (UniqueName: \"kubernetes.io/projected/b5827a44-9073-412a-90ec-653b5ac3f5fd-kube-api-access-5ps9j\") pod \"b5827a44-9073-412a-90ec-653b5ac3f5fd\" (UID: \"b5827a44-9073-412a-90ec-653b5ac3f5fd\") " Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.873998 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5827a44-9073-412a-90ec-653b5ac3f5fd-kube-api-access-5ps9j" (OuterVolumeSpecName: "kube-api-access-5ps9j") pod "b5827a44-9073-412a-90ec-653b5ac3f5fd" (UID: "b5827a44-9073-412a-90ec-653b5ac3f5fd"). InnerVolumeSpecName "kube-api-access-5ps9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.939287 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5827a44-9073-412a-90ec-653b5ac3f5fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b5827a44-9073-412a-90ec-653b5ac3f5fd" (UID: "b5827a44-9073-412a-90ec-653b5ac3f5fd"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.946580 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ps9j\" (UniqueName: \"kubernetes.io/projected/b5827a44-9073-412a-90ec-653b5ac3f5fd-kube-api-access-5ps9j\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.946613 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5827a44-9073-412a-90ec-653b5ac3f5fd-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:24 crc kubenswrapper[4693]: I1212 16:11:24.958291 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5827a44-9073-412a-90ec-653b5ac3f5fd-config" (OuterVolumeSpecName: "config") pod "b5827a44-9073-412a-90ec-653b5ac3f5fd" (UID: "b5827a44-9073-412a-90ec-653b5ac3f5fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.049040 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5827a44-9073-412a-90ec-653b5ac3f5fd-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.079930 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.149901 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3e230bf0-bb21-469f-ad05-1d061026d73f-ring-data-devices\") pod \"3e230bf0-bb21-469f-ad05-1d061026d73f\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.150098 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t87f\" (UniqueName: \"kubernetes.io/projected/3e230bf0-bb21-469f-ad05-1d061026d73f-kube-api-access-4t87f\") pod \"3e230bf0-bb21-469f-ad05-1d061026d73f\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.150151 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e230bf0-bb21-469f-ad05-1d061026d73f-scripts\") pod \"3e230bf0-bb21-469f-ad05-1d061026d73f\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.150246 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-dispersionconf\") pod \"3e230bf0-bb21-469f-ad05-1d061026d73f\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.150720 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e230bf0-bb21-469f-ad05-1d061026d73f-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3e230bf0-bb21-469f-ad05-1d061026d73f" (UID: "3e230bf0-bb21-469f-ad05-1d061026d73f"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.150377 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3e230bf0-bb21-469f-ad05-1d061026d73f-etc-swift\") pod \"3e230bf0-bb21-469f-ad05-1d061026d73f\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.151040 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-swiftconf\") pod \"3e230bf0-bb21-469f-ad05-1d061026d73f\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.151096 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-combined-ca-bundle\") pod \"3e230bf0-bb21-469f-ad05-1d061026d73f\" (UID: \"3e230bf0-bb21-469f-ad05-1d061026d73f\") " Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.153123 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e230bf0-bb21-469f-ad05-1d061026d73f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3e230bf0-bb21-469f-ad05-1d061026d73f" (UID: "3e230bf0-bb21-469f-ad05-1d061026d73f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.153957 4693 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3e230bf0-bb21-469f-ad05-1d061026d73f-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.153980 4693 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3e230bf0-bb21-469f-ad05-1d061026d73f-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.154723 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e230bf0-bb21-469f-ad05-1d061026d73f-kube-api-access-4t87f" (OuterVolumeSpecName: "kube-api-access-4t87f") pod "3e230bf0-bb21-469f-ad05-1d061026d73f" (UID: "3e230bf0-bb21-469f-ad05-1d061026d73f"). InnerVolumeSpecName "kube-api-access-4t87f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.160453 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3e230bf0-bb21-469f-ad05-1d061026d73f" (UID: "3e230bf0-bb21-469f-ad05-1d061026d73f"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.179098 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e230bf0-bb21-469f-ad05-1d061026d73f-scripts" (OuterVolumeSpecName: "scripts") pod "3e230bf0-bb21-469f-ad05-1d061026d73f" (UID: "3e230bf0-bb21-469f-ad05-1d061026d73f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.191528 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3e230bf0-bb21-469f-ad05-1d061026d73f" (UID: "3e230bf0-bb21-469f-ad05-1d061026d73f"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.196994 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e230bf0-bb21-469f-ad05-1d061026d73f" (UID: "3e230bf0-bb21-469f-ad05-1d061026d73f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.255413 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t87f\" (UniqueName: \"kubernetes.io/projected/3e230bf0-bb21-469f-ad05-1d061026d73f-kube-api-access-4t87f\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.255451 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e230bf0-bb21-469f-ad05-1d061026d73f-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.255460 4693 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.255471 4693 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.255479 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e230bf0-bb21-469f-ad05-1d061026d73f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.353714 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"3d90eb59-c661-4bec-ac19-87304c2c6f00","Type":"ContainerStarted","Data":"ab656abfc2a38b46d29c6ac206e2baa8b35190f9bef323e090599e352e76a4f8"} Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.359755 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-b8xvf" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.366918 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6cwfg" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.370964 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-b8xvf" event={"ID":"3e230bf0-bb21-469f-ad05-1d061026d73f","Type":"ContainerDied","Data":"eb58a19cea89e495a7f07ee4ebe7f74cb28093c6dbfc510f3bd6b24bd1284eda"} Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.371036 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb58a19cea89e495a7f07ee4ebe7f74cb28093c6dbfc510f3bd6b24bd1284eda" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.371050 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6cwfg" event={"ID":"b5827a44-9073-412a-90ec-653b5ac3f5fd","Type":"ContainerDied","Data":"e2024a156dca85df3754725ff427c2d1e4ec7f6c85d942da702ed7961c0f1438"} Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.371076 4693 scope.go:117] "RemoveContainer" containerID="fecdd48ca99acb8b9346023cbbbfcb64889582cae9090ca80a85da8870bf3172" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.398481 4693 scope.go:117] "RemoveContainer" containerID="91721f41cfc8320be57837e7006b01189b808edb4067dfabe9a9afe731c3233b" Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.425397 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6cwfg"] Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.435668 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6cwfg"] Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.544491 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-tzjth" podUID="636fcc75-4f63-4bf9-bcfe-8d0720896f25" containerName="ovn-controller" probeResult="failure" output=< Dec 12 16:11:25 crc kubenswrapper[4693]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 12 16:11:25 crc kubenswrapper[4693]: > Dec 12 16:11:25 crc kubenswrapper[4693]: I1212 16:11:25.951605 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-296qr" Dec 12 16:11:26 crc kubenswrapper[4693]: I1212 16:11:26.072923 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:11:26 crc kubenswrapper[4693]: I1212 16:11:26.084291 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39fd15fe-bbdd-49d4-95cc-70049f5b8d3c-etc-swift\") pod \"swift-storage-0\" (UID: \"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c\") " pod="openstack/swift-storage-0" Dec 12 16:11:26 crc kubenswrapper[4693]: I1212 16:11:26.241228 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 12 16:11:27 crc kubenswrapper[4693]: I1212 16:11:27.134761 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 12 16:11:27 crc kubenswrapper[4693]: W1212 16:11:27.136678 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39fd15fe_bbdd_49d4_95cc_70049f5b8d3c.slice/crio-32f4fe2ab8cbae6bd90ae95ac0de165721ae9f488467e00607a7e85544531319 WatchSource:0}: Error finding container 32f4fe2ab8cbae6bd90ae95ac0de165721ae9f488467e00607a7e85544531319: Status 404 returned error can't find the container with id 32f4fe2ab8cbae6bd90ae95ac0de165721ae9f488467e00607a7e85544531319 Dec 12 16:11:27 crc kubenswrapper[4693]: I1212 16:11:27.245076 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 12 16:11:27 crc kubenswrapper[4693]: I1212 16:11:27.371143 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5827a44-9073-412a-90ec-653b5ac3f5fd" path="/var/lib/kubelet/pods/b5827a44-9073-412a-90ec-653b5ac3f5fd/volumes" Dec 12 16:11:27 crc kubenswrapper[4693]: I1212 16:11:27.393377 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c","Type":"ContainerStarted","Data":"32f4fe2ab8cbae6bd90ae95ac0de165721ae9f488467e00607a7e85544531319"} Dec 12 16:11:27 crc kubenswrapper[4693]: I1212 16:11:27.393535 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerName="prometheus" containerID="cri-o://324679e43234b942e5d24cd9684a27d2ac0d2a121c40f883aa2c7d20f4b41c76" gracePeriod=600 Dec 12 16:11:27 crc kubenswrapper[4693]: I1212 16:11:27.393970 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerName="thanos-sidecar" containerID="cri-o://1c41d09fc575116b1ddb907fbc3208b21a8dbdfe93cf53a679d199cb01fb42a6" gracePeriod=600 Dec 12 16:11:27 crc kubenswrapper[4693]: I1212 16:11:27.394035 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerName="config-reloader" containerID="cri-o://21f42779038eb14690b3f96c1c10191dc464d901e3c445bc82fac04b3ac04ad7" gracePeriod=600 Dec 12 16:11:27 crc kubenswrapper[4693]: I1212 16:11:27.509015 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="d45363e2-3684-4fc6-b322-d99e6e87d3fd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Dec 12 16:11:27 crc kubenswrapper[4693]: I1212 16:11:27.519973 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="62a37a53-6f53-4b51-b493-edfdb42c3a93" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Dec 12 16:11:27 crc kubenswrapper[4693]: I1212 16:11:27.789903 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Dec 12 16:11:27 crc kubenswrapper[4693]: I1212 16:11:27.866990 4693 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="6fd6556d-68c5-4492-804c-bc3188ab39b7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.428497 4693 generic.go:334] "Generic (PLEG): container finished" podID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerID="1c41d09fc575116b1ddb907fbc3208b21a8dbdfe93cf53a679d199cb01fb42a6" exitCode=0 Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.428528 4693 generic.go:334] "Generic (PLEG): container finished" podID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerID="21f42779038eb14690b3f96c1c10191dc464d901e3c445bc82fac04b3ac04ad7" exitCode=0 Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.428537 4693 generic.go:334] "Generic (PLEG): container finished" podID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerID="324679e43234b942e5d24cd9684a27d2ac0d2a121c40f883aa2c7d20f4b41c76" exitCode=0 Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.428557 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3fe5a970-1de1-4166-815b-81097dfe20ce","Type":"ContainerDied","Data":"1c41d09fc575116b1ddb907fbc3208b21a8dbdfe93cf53a679d199cb01fb42a6"} Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.428583 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3fe5a970-1de1-4166-815b-81097dfe20ce","Type":"ContainerDied","Data":"21f42779038eb14690b3f96c1c10191dc464d901e3c445bc82fac04b3ac04ad7"} Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.428593 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3fe5a970-1de1-4166-815b-81097dfe20ce","Type":"ContainerDied","Data":"324679e43234b942e5d24cd9684a27d2ac0d2a121c40f883aa2c7d20f4b41c76"} Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.762012 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.839983 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\") pod \"3fe5a970-1de1-4166-815b-81097dfe20ce\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.840088 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3fe5a970-1de1-4166-815b-81097dfe20ce-tls-assets\") pod \"3fe5a970-1de1-4166-815b-81097dfe20ce\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.840119 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-web-config\") pod \"3fe5a970-1de1-4166-815b-81097dfe20ce\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.840167 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3fe5a970-1de1-4166-815b-81097dfe20ce-prometheus-metric-storage-rulefiles-0\") pod \"3fe5a970-1de1-4166-815b-81097dfe20ce\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.840228 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-config\") pod \"3fe5a970-1de1-4166-815b-81097dfe20ce\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.840299 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jcx6\" (UniqueName: \"kubernetes.io/projected/3fe5a970-1de1-4166-815b-81097dfe20ce-kube-api-access-8jcx6\") pod \"3fe5a970-1de1-4166-815b-81097dfe20ce\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.840357 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-thanos-prometheus-http-client-file\") pod \"3fe5a970-1de1-4166-815b-81097dfe20ce\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.840388 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3fe5a970-1de1-4166-815b-81097dfe20ce-config-out\") pod \"3fe5a970-1de1-4166-815b-81097dfe20ce\" (UID: \"3fe5a970-1de1-4166-815b-81097dfe20ce\") " Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.842706 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fe5a970-1de1-4166-815b-81097dfe20ce-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "3fe5a970-1de1-4166-815b-81097dfe20ce" (UID: "3fe5a970-1de1-4166-815b-81097dfe20ce"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.853287 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fe5a970-1de1-4166-815b-81097dfe20ce-config-out" (OuterVolumeSpecName: "config-out") pod "3fe5a970-1de1-4166-815b-81097dfe20ce" (UID: "3fe5a970-1de1-4166-815b-81097dfe20ce"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.853954 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-config" (OuterVolumeSpecName: "config") pod "3fe5a970-1de1-4166-815b-81097dfe20ce" (UID: "3fe5a970-1de1-4166-815b-81097dfe20ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.854963 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe5a970-1de1-4166-815b-81097dfe20ce-kube-api-access-8jcx6" (OuterVolumeSpecName: "kube-api-access-8jcx6") pod "3fe5a970-1de1-4166-815b-81097dfe20ce" (UID: "3fe5a970-1de1-4166-815b-81097dfe20ce"). InnerVolumeSpecName "kube-api-access-8jcx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.857349 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "3fe5a970-1de1-4166-815b-81097dfe20ce" (UID: "3fe5a970-1de1-4166-815b-81097dfe20ce"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.858000 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe5a970-1de1-4166-815b-81097dfe20ce-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "3fe5a970-1de1-4166-815b-81097dfe20ce" (UID: "3fe5a970-1de1-4166-815b-81097dfe20ce"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.879888 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "3fe5a970-1de1-4166-815b-81097dfe20ce" (UID: "3fe5a970-1de1-4166-815b-81097dfe20ce"). InnerVolumeSpecName "pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.898979 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-web-config" (OuterVolumeSpecName: "web-config") pod "3fe5a970-1de1-4166-815b-81097dfe20ce" (UID: "3fe5a970-1de1-4166-815b-81097dfe20ce"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.937055 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.944111 4693 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3fe5a970-1de1-4166-815b-81097dfe20ce-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.944148 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.944163 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jcx6\" (UniqueName: \"kubernetes.io/projected/3fe5a970-1de1-4166-815b-81097dfe20ce-kube-api-access-8jcx6\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.944177 4693 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.944189 4693 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3fe5a970-1de1-4166-815b-81097dfe20ce-config-out\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.944244 4693 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\") on node \"crc\" " Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.944261 4693 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3fe5a970-1de1-4166-815b-81097dfe20ce-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.944291 4693 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3fe5a970-1de1-4166-815b-81097dfe20ce-web-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.986211 4693 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 12 16:11:28 crc kubenswrapper[4693]: I1212 16:11:28.986446 4693 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0") on node "crc"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.046255 4693 reconciler_common.go:293] "Volume detached for volume \"pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\") on node \"crc\" DevicePath \"\""
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.443551 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"3d90eb59-c661-4bec-ac19-87304c2c6f00","Type":"ContainerStarted","Data":"b78c3167bff48803366947006054424c261bc302dbca930894845a200b7f9e1b"}
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.457879 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c","Type":"ContainerStarted","Data":"b16bb227522229b7e1d02a1d86d0f0a2ded36056f5b2e608b781aeda4cb2e1d5"}
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.457935 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c","Type":"ContainerStarted","Data":"00f9c4af0915eb7c77001de4dbe1745e6425dfc349a2c6279e8d271dcd5a1dc5"}
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.457967 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c","Type":"ContainerStarted","Data":"2865cbda03a774415d89aa6122aa0a90a44bb6cd8194af93d561b07671c8d455"}
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.461034 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3fe5a970-1de1-4166-815b-81097dfe20ce","Type":"ContainerDied","Data":"35b30b29b7da8a025aa9088941729352dc4ae3db7b6add81767fce39f440ef00"}
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.461083 4693 scope.go:117] "RemoveContainer" containerID="1c41d09fc575116b1ddb907fbc3208b21a8dbdfe93cf53a679d199cb01fb42a6"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.461208 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.479640 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.628563911 podStartE2EDuration="6.479618223s" podCreationTimestamp="2025-12-12 16:11:23 +0000 UTC" firstStartedPulling="2025-12-12 16:11:24.818770147 +0000 UTC m=+1511.987409748" lastFinishedPulling="2025-12-12 16:11:28.669824459 +0000 UTC m=+1515.838464060" observedRunningTime="2025-12-12 16:11:29.473299873 +0000 UTC m=+1516.641939484" watchObservedRunningTime="2025-12-12 16:11:29.479618223 +0000 UTC m=+1516.648257824"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.544487 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.567748 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.571484 4693 scope.go:117] "RemoveContainer" containerID="21f42779038eb14690b3f96c1c10191dc464d901e3c445bc82fac04b3ac04ad7"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.595012 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 12 16:11:29 crc kubenswrapper[4693]: E1212 16:11:29.595573 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5827a44-9073-412a-90ec-653b5ac3f5fd" containerName="init"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.595592 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5827a44-9073-412a-90ec-653b5ac3f5fd" containerName="init"
Dec 12 16:11:29 crc kubenswrapper[4693]: E1212 16:11:29.595619 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerName="prometheus"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.595625 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerName="prometheus"
Dec 12 16:11:29 crc kubenswrapper[4693]: E1212 16:11:29.595643 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerName="config-reloader"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.595648 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerName="config-reloader"
Dec 12 16:11:29 crc kubenswrapper[4693]: E1212 16:11:29.595660 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerName="thanos-sidecar"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.595666 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerName="thanos-sidecar"
Dec 12 16:11:29 crc kubenswrapper[4693]: E1212 16:11:29.595688 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e230bf0-bb21-469f-ad05-1d061026d73f" containerName="swift-ring-rebalance"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.595697 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e230bf0-bb21-469f-ad05-1d061026d73f" containerName="swift-ring-rebalance"
Dec 12 16:11:29 crc kubenswrapper[4693]: E1212 16:11:29.595709 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5827a44-9073-412a-90ec-653b5ac3f5fd" containerName="dnsmasq-dns"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.595716 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5827a44-9073-412a-90ec-653b5ac3f5fd" containerName="dnsmasq-dns"
Dec 12 16:11:29 crc kubenswrapper[4693]: E1212 16:11:29.595722 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerName="init-config-reloader"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.595727 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerName="init-config-reloader"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.595945 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e230bf0-bb21-469f-ad05-1d061026d73f" containerName="swift-ring-rebalance"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.595967 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5827a44-9073-412a-90ec-653b5ac3f5fd" containerName="dnsmasq-dns"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.595986 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerName="thanos-sidecar"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.595998 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerName="prometheus"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.596007 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fe5a970-1de1-4166-815b-81097dfe20ce" containerName="config-reloader"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.597973 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.602531 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.602703 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-knjnx"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.606390 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.606603 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.606779 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.607630 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.620717 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.631638 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.647138 4693 scope.go:117] "RemoveContainer" containerID="324679e43234b942e5d24cd9684a27d2ac0d2a121c40f883aa2c7d20f4b41c76"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.665148 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.665221 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-config\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.665299 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.665323 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/03515034-0f60-4e96-b2cc-9784f6e07887-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.665344 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/03515034-0f60-4e96-b2cc-9784f6e07887-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.665412 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.665431 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/03515034-0f60-4e96-b2cc-9784f6e07887-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.665447 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnw47\" (UniqueName: \"kubernetes.io/projected/03515034-0f60-4e96-b2cc-9784f6e07887-kube-api-access-gnw47\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.665481 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.665498 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.665525 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.686480 4693 scope.go:117] "RemoveContainer" containerID="5a0f8541c5a51323408ef686b59150c1f68f5a9887ed62a424c2c47b92efa9ad"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.767593 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.767638 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.767674 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.767730 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.767772 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-config\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.767810 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.767831 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/03515034-0f60-4e96-b2cc-9784f6e07887-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.767852 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/03515034-0f60-4e96-b2cc-9784f6e07887-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.767917 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.767934 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/03515034-0f60-4e96-b2cc-9784f6e07887-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.767952 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnw47\" (UniqueName: \"kubernetes.io/projected/03515034-0f60-4e96-b2cc-9784f6e07887-kube-api-access-gnw47\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.770149 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/03515034-0f60-4e96-b2cc-9784f6e07887-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.774902 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.775018 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.775094 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/03515034-0f60-4e96-b2cc-9784f6e07887-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.775465 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-config\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.775680 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.776395 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.776540 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/616b5aed7df61e2529192d0a714d9bffe231d3fa4b75c2ae3a6ad54f9059d388/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.776763 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.777529 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/03515034-0f60-4e96-b2cc-9784f6e07887-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.783754 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/03515034-0f60-4e96-b2cc-9784f6e07887-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.803630 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnw47\" (UniqueName: \"kubernetes.io/projected/03515034-0f60-4e96-b2cc-9784f6e07887-kube-api-access-gnw47\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.841201 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ab59d63-289c-4c11-86f0-8b9c780b35c0\") pod \"prometheus-metric-storage-0\" (UID: \"03515034-0f60-4e96-b2cc-9784f6e07887\") " pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:29 crc kubenswrapper[4693]: I1212 16:11:29.922429 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Dec 12 16:11:30 crc kubenswrapper[4693]: I1212 16:11:30.410747 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 12 16:11:30 crc kubenswrapper[4693]: I1212 16:11:30.477678 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03515034-0f60-4e96-b2cc-9784f6e07887","Type":"ContainerStarted","Data":"b63b98403bb57fb7439f94e57a5336846db5b6d0f097f0f22bfc59a2fe6058df"}
Dec 12 16:11:30 crc kubenswrapper[4693]: I1212 16:11:30.489982 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c","Type":"ContainerStarted","Data":"39f134c7d5626ac8bb71d8875a8f8bd7377535a70ee57048bcf7b3dcd6e14536"}
Dec 12 16:11:30 crc kubenswrapper[4693]: I1212 16:11:30.537832 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-tzjth" podUID="636fcc75-4f63-4bf9-bcfe-8d0720896f25" containerName="ovn-controller" probeResult="failure" output=<
Dec 12 16:11:30 crc kubenswrapper[4693]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Dec 12 16:11:30 crc kubenswrapper[4693]: >
Dec 12 16:11:30 crc kubenswrapper[4693]: I1212 16:11:30.899953 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-296qr"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.124416 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-tzjth-config-67lmk"]
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.127701 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.135579 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.143811 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tzjth-config-67lmk"]
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.216607 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad71429b-6529-4902-894f-f27244f61dd3-scripts\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.216660 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xks8m\" (UniqueName: \"kubernetes.io/projected/ad71429b-6529-4902-894f-f27244f61dd3-kube-api-access-xks8m\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.216726 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-run-ovn\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.216760 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-log-ovn\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.216818 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-run\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.216832 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ad71429b-6529-4902-894f-f27244f61dd3-additional-scripts\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.318083 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad71429b-6529-4902-894f-f27244f61dd3-scripts\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.318468 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xks8m\" (UniqueName: \"kubernetes.io/projected/ad71429b-6529-4902-894f-f27244f61dd3-kube-api-access-xks8m\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.318545 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-run-ovn\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.318585 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-log-ovn\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.318641 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-run\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.318659 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ad71429b-6529-4902-894f-f27244f61dd3-additional-scripts\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.319337 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ad71429b-6529-4902-894f-f27244f61dd3-additional-scripts\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.319874 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-run-ovn\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.319944 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-log-ovn\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.319984 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-run\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.321085 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad71429b-6529-4902-894f-f27244f61dd3-scripts\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.344877 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xks8m\" (UniqueName: \"kubernetes.io/projected/ad71429b-6529-4902-894f-f27244f61dd3-kube-api-access-xks8m\") pod \"ovn-controller-tzjth-config-67lmk\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.369423 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fe5a970-1de1-4166-815b-81097dfe20ce" path="/var/lib/kubelet/pods/3fe5a970-1de1-4166-815b-81097dfe20ce/volumes"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.465350 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tzjth-config-67lmk"
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.520925 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c","Type":"ContainerStarted","Data":"5e1dbd6eecc127544ffaf450a2d0f6ffeaf3e10064012169505d891d596548ee"}
Dec 12 16:11:31 crc kubenswrapper[4693]: I1212 16:11:31.522006 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c","Type":"ContainerStarted","Data":"17351e6dfd013fe4325290ea3bf2af4744b2415f7148518563e2363f65191324"}
Dec 12 16:11:32 crc kubenswrapper[4693]: I1212 16:11:32.061253 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tzjth-config-67lmk"]
Dec 12 16:11:32 crc kubenswrapper[4693]: I1212 16:11:32.531668 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tzjth-config-67lmk" event={"ID":"ad71429b-6529-4902-894f-f27244f61dd3","Type":"ContainerStarted","Data":"45f86458d7ced2cc1adfd236c06901c31c1c240d73843e2b17cf1fed70c6ffc0"}
Dec 12 16:11:34 crc kubenswrapper[4693]: I1212 16:11:34.572205 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c","Type":"ContainerStarted","Data":"57ce77951f2308dbb72de8cd72493c01f1bd9f6c1fcf432a31ff7575f224f5e1"}
Dec 12 16:11:34 crc kubenswrapper[4693]: I1212 16:11:34.574317 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03515034-0f60-4e96-b2cc-9784f6e07887","Type":"ContainerStarted","Data":"1c4ff5c6fe3562455281aeb95a7681d2e3025447fba303b4b7069cede64693f8"}
Dec 12 16:11:34 crc kubenswrapper[4693]: I1212 16:11:34.578449 4693 generic.go:334] "Generic (PLEG): container finished" podID="ad71429b-6529-4902-894f-f27244f61dd3" containerID="70974cf4b283d0d38da16dc0229eb470987c19b127323745e0297d03d487d4d1" exitCode=0
Dec 12 16:11:34 crc kubenswrapper[4693]: I1212 16:11:34.578500 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tzjth-config-67lmk" event={"ID":"ad71429b-6529-4902-894f-f27244f61dd3","Type":"ContainerDied","Data":"70974cf4b283d0d38da16dc0229eb470987c19b127323745e0297d03d487d4d1"}
Dec 12 16:11:35 crc kubenswrapper[4693]: I1212 16:11:35.532168 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-tzjth"
Dec 12 16:11:37 crc kubenswrapper[4693]: I1212 16:11:37.510484 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Dec 12 16:11:37 crc kubenswrapper[4693]: I1212 16:11:37.516822 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="62a37a53-6f53-4b51-b493-edfdb42c3a93" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused"
Dec 12 16:11:37 crc kubenswrapper[4693]: I1212 16:11:37.790014 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Dec 12 16:11:37 crc kubenswrapper[4693]: I1212 16:11:37.867937 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Dec 12 16:11:41 crc kubenswrapper[4693]: I1212 16:11:41.707884 4693 generic.go:334] "Generic (PLEG): container finished" podID="03515034-0f60-4e96-b2cc-9784f6e07887" containerID="1c4ff5c6fe3562455281aeb95a7681d2e3025447fba303b4b7069cede64693f8" exitCode=0
Dec 12 16:11:41 crc kubenswrapper[4693]: I1212 16:11:41.708427 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03515034-0f60-4e96-b2cc-9784f6e07887","Type":"ContainerDied","Data":"1c4ff5c6fe3562455281aeb95a7681d2e3025447fba303b4b7069cede64693f8"}
Dec 12 16:11:42 crc kubenswrapper[4693]: E1212 16:11:42.161675 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified"
Dec 12 16:11:42 crc kubenswrapper[4693]: E1212 16:11:42.161863 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptpvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-r9p8t_openstack(56fb0f10-fbce-4aed-9a10-7128021ce48f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:11:42 crc kubenswrapper[4693]: E1212 16:11:42.163093 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-r9p8t" podUID="56fb0f10-fbce-4aed-9a10-7128021ce48f" Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.299578 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-tzjth-config-67lmk" Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.402412 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xks8m\" (UniqueName: \"kubernetes.io/projected/ad71429b-6529-4902-894f-f27244f61dd3-kube-api-access-xks8m\") pod \"ad71429b-6529-4902-894f-f27244f61dd3\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.402468 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-run\") pod \"ad71429b-6529-4902-894f-f27244f61dd3\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.402613 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-run" (OuterVolumeSpecName: "var-run") pod "ad71429b-6529-4902-894f-f27244f61dd3" (UID: "ad71429b-6529-4902-894f-f27244f61dd3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.402681 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-log-ovn\") pod \"ad71429b-6529-4902-894f-f27244f61dd3\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.402808 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-run-ovn\") pod \"ad71429b-6529-4902-894f-f27244f61dd3\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.402845 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ad71429b-6529-4902-894f-f27244f61dd3-additional-scripts\") pod \"ad71429b-6529-4902-894f-f27244f61dd3\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.402853 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ad71429b-6529-4902-894f-f27244f61dd3" (UID: "ad71429b-6529-4902-894f-f27244f61dd3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.402887 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ad71429b-6529-4902-894f-f27244f61dd3" (UID: "ad71429b-6529-4902-894f-f27244f61dd3"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.402893 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad71429b-6529-4902-894f-f27244f61dd3-scripts\") pod \"ad71429b-6529-4902-894f-f27244f61dd3\" (UID: \"ad71429b-6529-4902-894f-f27244f61dd3\") " Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.403481 4693 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.403498 4693 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.403511 4693 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ad71429b-6529-4902-894f-f27244f61dd3-var-run\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.403646 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad71429b-6529-4902-894f-f27244f61dd3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ad71429b-6529-4902-894f-f27244f61dd3" (UID: "ad71429b-6529-4902-894f-f27244f61dd3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.405950 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad71429b-6529-4902-894f-f27244f61dd3-scripts" (OuterVolumeSpecName: "scripts") pod "ad71429b-6529-4902-894f-f27244f61dd3" (UID: "ad71429b-6529-4902-894f-f27244f61dd3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.420037 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad71429b-6529-4902-894f-f27244f61dd3-kube-api-access-xks8m" (OuterVolumeSpecName: "kube-api-access-xks8m") pod "ad71429b-6529-4902-894f-f27244f61dd3" (UID: "ad71429b-6529-4902-894f-f27244f61dd3"). InnerVolumeSpecName "kube-api-access-xks8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.504966 4693 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ad71429b-6529-4902-894f-f27244f61dd3-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.505239 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad71429b-6529-4902-894f-f27244f61dd3-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.505250 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xks8m\" (UniqueName: \"kubernetes.io/projected/ad71429b-6529-4902-894f-f27244f61dd3-kube-api-access-xks8m\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.722442 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03515034-0f60-4e96-b2cc-9784f6e07887","Type":"ContainerStarted","Data":"6dbc578984d491f8327949d9941e2bf00a5654d5ded55dddc7856d87c34ec166"} Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.724478 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tzjth-config-67lmk" event={"ID":"ad71429b-6529-4902-894f-f27244f61dd3","Type":"ContainerDied","Data":"45f86458d7ced2cc1adfd236c06901c31c1c240d73843e2b17cf1fed70c6ffc0"} Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.724517 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tzjth-config-67lmk" Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.724526 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45f86458d7ced2cc1adfd236c06901c31c1c240d73843e2b17cf1fed70c6ffc0" Dec 12 16:11:42 crc kubenswrapper[4693]: I1212 16:11:42.728558 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c","Type":"ContainerStarted","Data":"7040cf306479e1a8ce397365c6ced98265efa7f7e8f7df044d54495da1ad8877"} Dec 12 16:11:42 crc kubenswrapper[4693]: E1212 16:11:42.730122 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-r9p8t" podUID="56fb0f10-fbce-4aed-9a10-7128021ce48f" Dec 12 16:11:43 crc kubenswrapper[4693]: I1212 16:11:43.411032 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-tzjth-config-67lmk"] Dec 12 16:11:43 crc kubenswrapper[4693]: I1212 16:11:43.425754 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-tzjth-config-67lmk"] Dec 12 16:11:44 crc kubenswrapper[4693]: I1212 16:11:44.755985 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c","Type":"ContainerStarted","Data":"3411ca6d5cfc8bc0e494b211fca14ebdcda9382834cf0344ac592469c8f5f10c"} Dec 12 16:11:44 crc kubenswrapper[4693]: I1212 16:11:44.756346 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c","Type":"ContainerStarted","Data":"3bfd12cd482f52d1c915660afa73cf35945b89ce91d49b03011850f3744761b5"} Dec 12 
16:11:44 crc kubenswrapper[4693]: I1212 16:11:44.756366 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c","Type":"ContainerStarted","Data":"ae551f20237847abda9d59bc6f2ce1d7177407623cad83f7e2f426ffb945dcd8"} Dec 12 16:11:45 crc kubenswrapper[4693]: I1212 16:11:45.369833 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad71429b-6529-4902-894f-f27244f61dd3" path="/var/lib/kubelet/pods/ad71429b-6529-4902-894f-f27244f61dd3/volumes" Dec 12 16:11:45 crc kubenswrapper[4693]: I1212 16:11:45.771836 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c","Type":"ContainerStarted","Data":"fbc27ca51b71caba7eff70ad4b9d9286c5f520293124bdaf08c813ee5ed56da8"} Dec 12 16:11:45 crc kubenswrapper[4693]: I1212 16:11:45.771898 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c","Type":"ContainerStarted","Data":"9e06d29859d84ca1d032147d95a66e793d7df27666c05cdb4b1d969bf99dcfa8"} Dec 12 16:11:45 crc kubenswrapper[4693]: I1212 16:11:45.771913 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c","Type":"ContainerStarted","Data":"9d7453609c00daac9620acd2b1d716511ca551d3656a6c1f79e798a042c9886d"} Dec 12 16:11:45 crc kubenswrapper[4693]: I1212 16:11:45.775446 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03515034-0f60-4e96-b2cc-9784f6e07887","Type":"ContainerStarted","Data":"1e283ea21ce7654db978edf841e985bf26796c328199572e7d18cdb3ab2e534a"} Dec 12 16:11:45 crc kubenswrapper[4693]: I1212 16:11:45.775480 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03515034-0f60-4e96-b2cc-9784f6e07887","Type":"ContainerStarted","Data":"08c36593fae72869de19a832f613617a974a9d6e79b3f8d7fa05c4967faf2d5e"} Dec 12 16:11:45 crc kubenswrapper[4693]: I1212 16:11:45.809667 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.809646073 podStartE2EDuration="16.809646073s" podCreationTimestamp="2025-12-12 16:11:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:11:45.797715672 +0000 UTC m=+1532.966355303" watchObservedRunningTime="2025-12-12 16:11:45.809646073 +0000 UTC m=+1532.978285674" Dec 12 16:11:46 crc kubenswrapper[4693]: I1212 16:11:46.793012 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"39fd15fe-bbdd-49d4-95cc-70049f5b8d3c","Type":"ContainerStarted","Data":"bbbb33afe0bca4dfbb4c8f6c849ff91405b603cfaa9b27846922183852e63fee"} Dec 12 16:11:46 crc kubenswrapper[4693]: I1212 16:11:46.838848 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.267391583 podStartE2EDuration="53.838822398s" podCreationTimestamp="2025-12-12 16:10:53 +0000 UTC" firstStartedPulling="2025-12-12 16:11:27.14195873 +0000 UTC m=+1514.310598331" lastFinishedPulling="2025-12-12 16:11:43.713389535 +0000 UTC m=+1530.882029146" observedRunningTime="2025-12-12 16:11:46.831223324 +0000 UTC m=+1533.999862945" watchObservedRunningTime="2025-12-12 16:11:46.838822398 +0000 UTC 
m=+1534.007461999" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.129819 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-drvh6"] Dec 12 16:11:47 crc kubenswrapper[4693]: E1212 16:11:47.130342 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad71429b-6529-4902-894f-f27244f61dd3" containerName="ovn-config" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.130362 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad71429b-6529-4902-894f-f27244f61dd3" containerName="ovn-config" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.130630 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad71429b-6529-4902-894f-f27244f61dd3" containerName="ovn-config" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.131891 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.134360 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.156778 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-drvh6"] Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.199513 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.199655 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwnlw\" (UniqueName: \"kubernetes.io/projected/0545ccf6-794f-4a89-912a-0e07df8534f6-kube-api-access-nwnlw\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.199690 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-config\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.199716 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-dns-svc\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.199970 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.200030 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.302031 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.302153 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwnlw\" (UniqueName: \"kubernetes.io/projected/0545ccf6-794f-4a89-912a-0e07df8534f6-kube-api-access-nwnlw\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.302181 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-config\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.302206 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-dns-svc\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.302299 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.302327 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.303199 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-config\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.303259 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.303305 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-ovsdbserver-sb\") pod 
\"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.303375 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.303837 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-dns-svc\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.323135 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwnlw\" (UniqueName: \"kubernetes.io/projected/0545ccf6-794f-4a89-912a-0e07df8534f6-kube-api-access-nwnlw\") pod \"dnsmasq-dns-764c5664d7-drvh6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.448464 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.518679 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.938068 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-nxx54"] Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.940173 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nxx54" Dec 12 16:11:47 crc kubenswrapper[4693]: I1212 16:11:47.956344 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-nxx54"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.023964 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsx4l\" (UniqueName: \"kubernetes.io/projected/2f2d3ae1-1fd0-4058-9723-e136c74ee739-kube-api-access-hsx4l\") pod \"cinder-db-create-nxx54\" (UID: \"2f2d3ae1-1fd0-4058-9723-e136c74ee739\") " pod="openstack/cinder-db-create-nxx54" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.024219 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0a52-account-create-update-6f92x"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.024632 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f2d3ae1-1fd0-4058-9723-e136c74ee739-operator-scripts\") pod \"cinder-db-create-nxx54\" (UID: \"2f2d3ae1-1fd0-4058-9723-e136c74ee739\") " pod="openstack/cinder-db-create-nxx54" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.027456 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0a52-account-create-update-6f92x" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.034950 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0a52-account-create-update-6f92x"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.036003 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.066703 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-drvh6"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.126753 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq5j9\" (UniqueName: \"kubernetes.io/projected/92ff8de1-8616-43f7-8bd4-1de3f4730b5f-kube-api-access-fq5j9\") pod \"cinder-0a52-account-create-update-6f92x\" (UID: \"92ff8de1-8616-43f7-8bd4-1de3f4730b5f\") " pod="openstack/cinder-0a52-account-create-update-6f92x" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.127007 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92ff8de1-8616-43f7-8bd4-1de3f4730b5f-operator-scripts\") pod \"cinder-0a52-account-create-update-6f92x\" (UID: \"92ff8de1-8616-43f7-8bd4-1de3f4730b5f\") " pod="openstack/cinder-0a52-account-create-update-6f92x" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.127099 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f2d3ae1-1fd0-4058-9723-e136c74ee739-operator-scripts\") pod \"cinder-db-create-nxx54\" (UID: \"2f2d3ae1-1fd0-4058-9723-e136c74ee739\") " pod="openstack/cinder-db-create-nxx54" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.127191 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsx4l\" (UniqueName: \"kubernetes.io/projected/2f2d3ae1-1fd0-4058-9723-e136c74ee739-kube-api-access-hsx4l\") pod \"cinder-db-create-nxx54\" (UID: \"2f2d3ae1-1fd0-4058-9723-e136c74ee739\") " pod="openstack/cinder-db-create-nxx54" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.128212 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f2d3ae1-1fd0-4058-9723-e136c74ee739-operator-scripts\") pod \"cinder-db-create-nxx54\" (UID: \"2f2d3ae1-1fd0-4058-9723-e136c74ee739\") " pod="openstack/cinder-db-create-nxx54" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.158464 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsx4l\" (UniqueName: \"kubernetes.io/projected/2f2d3ae1-1fd0-4058-9723-e136c74ee739-kube-api-access-hsx4l\") pod \"cinder-db-create-nxx54\" (UID: \"2f2d3ae1-1fd0-4058-9723-e136c74ee739\") " pod="openstack/cinder-db-create-nxx54" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.229100 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq5j9\" (UniqueName: \"kubernetes.io/projected/92ff8de1-8616-43f7-8bd4-1de3f4730b5f-kube-api-access-fq5j9\") pod \"cinder-0a52-account-create-update-6f92x\" (UID: \"92ff8de1-8616-43f7-8bd4-1de3f4730b5f\") " pod="openstack/cinder-0a52-account-create-update-6f92x" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.229187 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92ff8de1-8616-43f7-8bd4-1de3f4730b5f-operator-scripts\") pod \"cinder-0a52-account-create-update-6f92x\" (UID: \"92ff8de1-8616-43f7-8bd4-1de3f4730b5f\") " pod="openstack/cinder-0a52-account-create-update-6f92x" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.229896 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92ff8de1-8616-43f7-8bd4-1de3f4730b5f-operator-scripts\") pod \"cinder-0a52-account-create-update-6f92x\" (UID: \"92ff8de1-8616-43f7-8bd4-1de3f4730b5f\") " pod="openstack/cinder-0a52-account-create-update-6f92x" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.248068 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq5j9\" (UniqueName: \"kubernetes.io/projected/92ff8de1-8616-43f7-8bd4-1de3f4730b5f-kube-api-access-fq5j9\") pod \"cinder-0a52-account-create-update-6f92x\" (UID: \"92ff8de1-8616-43f7-8bd4-1de3f4730b5f\") " pod="openstack/cinder-0a52-account-create-update-6f92x" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.285660 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-fc4kg"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.287911 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-fc4kg" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.293841 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nxx54" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.316901 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-fc4kg"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.331512 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htpf9\" (UniqueName: \"kubernetes.io/projected/9371580e-96d9-4d4e-96ef-d049476af5eb-kube-api-access-htpf9\") pod \"heat-db-create-fc4kg\" (UID: \"9371580e-96d9-4d4e-96ef-d049476af5eb\") " pod="openstack/heat-db-create-fc4kg" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.331615 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9371580e-96d9-4d4e-96ef-d049476af5eb-operator-scripts\") pod \"heat-db-create-fc4kg\" (UID: \"9371580e-96d9-4d4e-96ef-d049476af5eb\") " pod="openstack/heat-db-create-fc4kg" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.360814 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0a52-account-create-update-6f92x" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.373418 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-md2sc"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.374849 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-md2sc" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.378104 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.378377 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.378725 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.378953 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9xp5g" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.423167 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-gpffz"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.424779 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gpffz" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.439244 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-md2sc"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.445080 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htpf9\" (UniqueName: \"kubernetes.io/projected/9371580e-96d9-4d4e-96ef-d049476af5eb-kube-api-access-htpf9\") pod \"heat-db-create-fc4kg\" (UID: \"9371580e-96d9-4d4e-96ef-d049476af5eb\") " pod="openstack/heat-db-create-fc4kg" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.445164 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0f26834-34b1-4d79-a777-d080ed1eb981-operator-scripts\") pod \"neutron-db-create-gpffz\" (UID: \"d0f26834-34b1-4d79-a777-d080ed1eb981\") " pod="openstack/neutron-db-create-gpffz" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.445220 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tt8h\" (UniqueName: \"kubernetes.io/projected/cb7abcfa-6215-4393-ab01-7710ddd3055d-kube-api-access-2tt8h\") pod \"keystone-db-sync-md2sc\" (UID: \"cb7abcfa-6215-4393-ab01-7710ddd3055d\") " pod="openstack/keystone-db-sync-md2sc" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.445410 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shvvl\" (UniqueName: \"kubernetes.io/projected/d0f26834-34b1-4d79-a777-d080ed1eb981-kube-api-access-shvvl\") pod \"neutron-db-create-gpffz\" (UID: \"d0f26834-34b1-4d79-a777-d080ed1eb981\") " pod="openstack/neutron-db-create-gpffz" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.445460 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9371580e-96d9-4d4e-96ef-d049476af5eb-operator-scripts\") pod \"heat-db-create-fc4kg\" (UID: \"9371580e-96d9-4d4e-96ef-d049476af5eb\") " pod="openstack/heat-db-create-fc4kg" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.445624 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7abcfa-6215-4393-ab01-7710ddd3055d-combined-ca-bundle\") pod \"keystone-db-sync-md2sc\" (UID: 
\"cb7abcfa-6215-4393-ab01-7710ddd3055d\") " pod="openstack/keystone-db-sync-md2sc" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.445747 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb7abcfa-6215-4393-ab01-7710ddd3055d-config-data\") pod \"keystone-db-sync-md2sc\" (UID: \"cb7abcfa-6215-4393-ab01-7710ddd3055d\") " pod="openstack/keystone-db-sync-md2sc" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.447952 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9371580e-96d9-4d4e-96ef-d049476af5eb-operator-scripts\") pod \"heat-db-create-fc4kg\" (UID: \"9371580e-96d9-4d4e-96ef-d049476af5eb\") " pod="openstack/heat-db-create-fc4kg" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.477620 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htpf9\" (UniqueName: \"kubernetes.io/projected/9371580e-96d9-4d4e-96ef-d049476af5eb-kube-api-access-htpf9\") pod \"heat-db-create-fc4kg\" (UID: \"9371580e-96d9-4d4e-96ef-d049476af5eb\") " pod="openstack/heat-db-create-fc4kg" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.489166 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0825-account-create-update-sdz9j"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.502811 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0825-account-create-update-sdz9j" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.510381 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.542954 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-fc4kg" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.548943 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shvvl\" (UniqueName: \"kubernetes.io/projected/d0f26834-34b1-4d79-a777-d080ed1eb981-kube-api-access-shvvl\") pod \"neutron-db-create-gpffz\" (UID: \"d0f26834-34b1-4d79-a777-d080ed1eb981\") " pod="openstack/neutron-db-create-gpffz" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.549193 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8579a0a1-a803-4422-a697-02b34eb25fae-operator-scripts\") pod \"neutron-0825-account-create-update-sdz9j\" (UID: \"8579a0a1-a803-4422-a697-02b34eb25fae\") " pod="openstack/neutron-0825-account-create-update-sdz9j" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.550671 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7abcfa-6215-4393-ab01-7710ddd3055d-combined-ca-bundle\") pod \"keystone-db-sync-md2sc\" (UID: \"cb7abcfa-6215-4393-ab01-7710ddd3055d\") " pod="openstack/keystone-db-sync-md2sc" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.556050 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0825-account-create-update-sdz9j"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.559923 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb7abcfa-6215-4393-ab01-7710ddd3055d-config-data\") pod \"keystone-db-sync-md2sc\" (UID: \"cb7abcfa-6215-4393-ab01-7710ddd3055d\") " pod="openstack/keystone-db-sync-md2sc" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.560097 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8svlj\" (UniqueName: \"kubernetes.io/projected/8579a0a1-a803-4422-a697-02b34eb25fae-kube-api-access-8svlj\") pod \"neutron-0825-account-create-update-sdz9j\" (UID: \"8579a0a1-a803-4422-a697-02b34eb25fae\") " pod="openstack/neutron-0825-account-create-update-sdz9j" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.560239 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0f26834-34b1-4d79-a777-d080ed1eb981-operator-scripts\") pod \"neutron-db-create-gpffz\" (UID: \"d0f26834-34b1-4d79-a777-d080ed1eb981\") " pod="openstack/neutron-db-create-gpffz" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.560310 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tt8h\" (UniqueName: \"kubernetes.io/projected/cb7abcfa-6215-4393-ab01-7710ddd3055d-kube-api-access-2tt8h\") pod \"keystone-db-sync-md2sc\" (UID: \"cb7abcfa-6215-4393-ab01-7710ddd3055d\") " pod="openstack/keystone-db-sync-md2sc" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.563076 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0f26834-34b1-4d79-a777-d080ed1eb981-operator-scripts\") pod \"neutron-db-create-gpffz\" (UID: \"d0f26834-34b1-4d79-a777-d080ed1eb981\") " pod="openstack/neutron-db-create-gpffz" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.570379 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-db-create-gpffz"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.572127 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7abcfa-6215-4393-ab01-7710ddd3055d-combined-ca-bundle\") pod \"keystone-db-sync-md2sc\" (UID: \"cb7abcfa-6215-4393-ab01-7710ddd3055d\") " pod="openstack/keystone-db-sync-md2sc" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.591789 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb7abcfa-6215-4393-ab01-7710ddd3055d-config-data\") pod \"keystone-db-sync-md2sc\" (UID: \"cb7abcfa-6215-4393-ab01-7710ddd3055d\") " pod="openstack/keystone-db-sync-md2sc" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.606604 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-734f-account-create-update-7mp6n"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.608913 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-734f-account-create-update-7mp6n" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.609936 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tt8h\" (UniqueName: \"kubernetes.io/projected/cb7abcfa-6215-4393-ab01-7710ddd3055d-kube-api-access-2tt8h\") pod \"keystone-db-sync-md2sc\" (UID: \"cb7abcfa-6215-4393-ab01-7710ddd3055d\") " pod="openstack/keystone-db-sync-md2sc" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.611428 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.616041 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-kwbtp"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.619051 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-kwbtp" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.624679 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shvvl\" (UniqueName: \"kubernetes.io/projected/d0f26834-34b1-4d79-a777-d080ed1eb981-kube-api-access-shvvl\") pod \"neutron-db-create-gpffz\" (UID: \"d0f26834-34b1-4d79-a777-d080ed1eb981\") " pod="openstack/neutron-db-create-gpffz" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.645495 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-734f-account-create-update-7mp6n"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.666887 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg77b\" (UniqueName: \"kubernetes.io/projected/c0ae148a-761b-42e5-b88f-909c600a34fe-kube-api-access-xg77b\") pod \"barbican-db-create-kwbtp\" (UID: \"c0ae148a-761b-42e5-b88f-909c600a34fe\") " pod="openstack/barbican-db-create-kwbtp" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.666949 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8svlj\" (UniqueName: \"kubernetes.io/projected/8579a0a1-a803-4422-a697-02b34eb25fae-kube-api-access-8svlj\") pod \"neutron-0825-account-create-update-sdz9j\" (UID: \"8579a0a1-a803-4422-a697-02b34eb25fae\") " pod="openstack/neutron-0825-account-create-update-sdz9j" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.666997 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0ae148a-761b-42e5-b88f-909c600a34fe-operator-scripts\") pod \"barbican-db-create-kwbtp\" (UID: \"c0ae148a-761b-42e5-b88f-909c600a34fe\") " pod="openstack/barbican-db-create-kwbtp" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.667089 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms2s8\" (UniqueName: \"kubernetes.io/projected/f36c30f6-251f-4311-b1cd-0094934a275a-kube-api-access-ms2s8\") pod \"barbican-734f-account-create-update-7mp6n\" (UID: \"f36c30f6-251f-4311-b1cd-0094934a275a\") " pod="openstack/barbican-734f-account-create-update-7mp6n" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.667151 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8579a0a1-a803-4422-a697-02b34eb25fae-operator-scripts\") pod \"neutron-0825-account-create-update-sdz9j\" (UID: \"8579a0a1-a803-4422-a697-02b34eb25fae\") " pod="openstack/neutron-0825-account-create-update-sdz9j" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.667282 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f36c30f6-251f-4311-b1cd-0094934a275a-operator-scripts\") pod \"barbican-734f-account-create-update-7mp6n\" (UID: \"f36c30f6-251f-4311-b1cd-0094934a275a\") " pod="openstack/barbican-734f-account-create-update-7mp6n" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.668672 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8579a0a1-a803-4422-a697-02b34eb25fae-operator-scripts\") pod \"neutron-0825-account-create-update-sdz9j\" (UID: \"8579a0a1-a803-4422-a697-02b34eb25fae\") " 
pod="openstack/neutron-0825-account-create-update-sdz9j" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.674698 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-kwbtp"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.685406 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-8ecf-account-create-update-frq7s"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.687039 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8ecf-account-create-update-frq7s" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.689191 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8svlj\" (UniqueName: \"kubernetes.io/projected/8579a0a1-a803-4422-a697-02b34eb25fae-kube-api-access-8svlj\") pod \"neutron-0825-account-create-update-sdz9j\" (UID: \"8579a0a1-a803-4422-a697-02b34eb25fae\") " pod="openstack/neutron-0825-account-create-update-sdz9j" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.691504 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.711845 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-8ecf-account-create-update-frq7s"] Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.726998 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gpffz" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.756995 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0825-account-create-update-sdz9j" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.758984 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-md2sc" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.771089 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjzdz\" (UniqueName: \"kubernetes.io/projected/4f60beb5-6256-43ae-9df7-c57bb4f2d27e-kube-api-access-jjzdz\") pod \"heat-8ecf-account-create-update-frq7s\" (UID: \"4f60beb5-6256-43ae-9df7-c57bb4f2d27e\") " pod="openstack/heat-8ecf-account-create-update-frq7s" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.771210 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f36c30f6-251f-4311-b1cd-0094934a275a-operator-scripts\") pod \"barbican-734f-account-create-update-7mp6n\" (UID: \"f36c30f6-251f-4311-b1cd-0094934a275a\") " pod="openstack/barbican-734f-account-create-update-7mp6n" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.771357 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg77b\" (UniqueName: \"kubernetes.io/projected/c0ae148a-761b-42e5-b88f-909c600a34fe-kube-api-access-xg77b\") pod \"barbican-db-create-kwbtp\" (UID: \"c0ae148a-761b-42e5-b88f-909c600a34fe\") " pod="openstack/barbican-db-create-kwbtp" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.771403 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f60beb5-6256-43ae-9df7-c57bb4f2d27e-operator-scripts\") pod \"heat-8ecf-account-create-update-frq7s\" (UID: \"4f60beb5-6256-43ae-9df7-c57bb4f2d27e\") " pod="openstack/heat-8ecf-account-create-update-frq7s" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.771465 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0ae148a-761b-42e5-b88f-909c600a34fe-operator-scripts\") pod \"barbican-db-create-kwbtp\" (UID: \"c0ae148a-761b-42e5-b88f-909c600a34fe\") " pod="openstack/barbican-db-create-kwbtp" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.771590 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms2s8\" (UniqueName: \"kubernetes.io/projected/f36c30f6-251f-4311-b1cd-0094934a275a-kube-api-access-ms2s8\") pod \"barbican-734f-account-create-update-7mp6n\" (UID: \"f36c30f6-251f-4311-b1cd-0094934a275a\") " pod="openstack/barbican-734f-account-create-update-7mp6n" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.772456 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f36c30f6-251f-4311-b1cd-0094934a275a-operator-scripts\") pod \"barbican-734f-account-create-update-7mp6n\" (UID: \"f36c30f6-251f-4311-b1cd-0094934a275a\") " pod="openstack/barbican-734f-account-create-update-7mp6n" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.774383 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0ae148a-761b-42e5-b88f-909c600a34fe-operator-scripts\") pod \"barbican-db-create-kwbtp\" (UID: \"c0ae148a-761b-42e5-b88f-909c600a34fe\") " pod="openstack/barbican-db-create-kwbtp" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.804916 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms2s8\" (UniqueName: 
\"kubernetes.io/projected/f36c30f6-251f-4311-b1cd-0094934a275a-kube-api-access-ms2s8\") pod \"barbican-734f-account-create-update-7mp6n\" (UID: \"f36c30f6-251f-4311-b1cd-0094934a275a\") " pod="openstack/barbican-734f-account-create-update-7mp6n" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.811974 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg77b\" (UniqueName: \"kubernetes.io/projected/c0ae148a-761b-42e5-b88f-909c600a34fe-kube-api-access-xg77b\") pod \"barbican-db-create-kwbtp\" (UID: \"c0ae148a-761b-42e5-b88f-909c600a34fe\") " pod="openstack/barbican-db-create-kwbtp" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.830141 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" event={"ID":"0545ccf6-794f-4a89-912a-0e07df8534f6","Type":"ContainerStarted","Data":"15bc37e8526176158fc8e414dfce95a31933555384df2cfd235965896ff0ac83"} Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.830193 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" event={"ID":"0545ccf6-794f-4a89-912a-0e07df8534f6","Type":"ContainerStarted","Data":"0b80d3a2d30994fd7ad361c807b8f1ad8fcbeb67c3fd917a4639805371aad4d1"} Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.875261 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjzdz\" (UniqueName: \"kubernetes.io/projected/4f60beb5-6256-43ae-9df7-c57bb4f2d27e-kube-api-access-jjzdz\") pod \"heat-8ecf-account-create-update-frq7s\" (UID: \"4f60beb5-6256-43ae-9df7-c57bb4f2d27e\") " pod="openstack/heat-8ecf-account-create-update-frq7s" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.875802 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f60beb5-6256-43ae-9df7-c57bb4f2d27e-operator-scripts\") pod \"heat-8ecf-account-create-update-frq7s\" (UID: \"4f60beb5-6256-43ae-9df7-c57bb4f2d27e\") " pod="openstack/heat-8ecf-account-create-update-frq7s" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.878726 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f60beb5-6256-43ae-9df7-c57bb4f2d27e-operator-scripts\") pod \"heat-8ecf-account-create-update-frq7s\" (UID: \"4f60beb5-6256-43ae-9df7-c57bb4f2d27e\") " pod="openstack/heat-8ecf-account-create-update-frq7s" Dec 12 16:11:48 crc kubenswrapper[4693]: I1212 16:11:48.908882 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjzdz\" (UniqueName: \"kubernetes.io/projected/4f60beb5-6256-43ae-9df7-c57bb4f2d27e-kube-api-access-jjzdz\") pod \"heat-8ecf-account-create-update-frq7s\" (UID: \"4f60beb5-6256-43ae-9df7-c57bb4f2d27e\") " pod="openstack/heat-8ecf-account-create-update-frq7s" Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.069795 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-734f-account-create-update-7mp6n" Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.080620 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-kwbtp" Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.094358 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0a52-account-create-update-6f92x"] Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.113213 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8ecf-account-create-update-frq7s" Dec 12 16:11:49 crc kubenswrapper[4693]: W1212 16:11:49.195284 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92ff8de1_8616_43f7_8bd4_1de3f4730b5f.slice/crio-ce53b5b095b3fafa98fe4572f4685c5f43c315660eac0a514d0837d4a46a4912 WatchSource:0}: Error finding container ce53b5b095b3fafa98fe4572f4685c5f43c315660eac0a514d0837d4a46a4912: Status 404 returned error can't find the container with id ce53b5b095b3fafa98fe4572f4685c5f43c315660eac0a514d0837d4a46a4912 Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.233044 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-nxx54"] Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.272688 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-fc4kg"] Dec 12 16:11:49 crc kubenswrapper[4693]: W1212 16:11:49.296226 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9371580e_96d9_4d4e_96ef_d049476af5eb.slice/crio-fa0c23f2da6368827a257d261404d244e8f8afbcb4f5d3ef21127b5f0f6e26a0 WatchSource:0}: Error finding container fa0c23f2da6368827a257d261404d244e8f8afbcb4f5d3ef21127b5f0f6e26a0: Status 404 returned error can't find the container with id fa0c23f2da6368827a257d261404d244e8f8afbcb4f5d3ef21127b5f0f6e26a0 Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.464846 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gpffz"] Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.629146 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0825-account-create-update-sdz9j"] Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.707864 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-md2sc"] Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.844465 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nxx54" event={"ID":"2f2d3ae1-1fd0-4058-9723-e136c74ee739","Type":"ContainerStarted","Data":"c336f6ee9e6625cd65e2e941ecc5893916889d0cdd31c58ae565af179194a45b"} Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.848283 4693 generic.go:334] "Generic (PLEG): container finished" podID="0545ccf6-794f-4a89-912a-0e07df8534f6" containerID="15bc37e8526176158fc8e414dfce95a31933555384df2cfd235965896ff0ac83" exitCode=0 Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.848356 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" event={"ID":"0545ccf6-794f-4a89-912a-0e07df8534f6","Type":"ContainerDied","Data":"15bc37e8526176158fc8e414dfce95a31933555384df2cfd235965896ff0ac83"} Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.852046 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-md2sc" event={"ID":"cb7abcfa-6215-4393-ab01-7710ddd3055d","Type":"ContainerStarted","Data":"b22383e9497edfc6ff406bb2152a1a31f47e5b17e65935e304d01e6d51b30e58"} Dec 12 16:11:49 crc kubenswrapper[4693]: 
I1212 16:11:49.855031 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-fc4kg" event={"ID":"9371580e-96d9-4d4e-96ef-d049476af5eb","Type":"ContainerStarted","Data":"fa0c23f2da6368827a257d261404d244e8f8afbcb4f5d3ef21127b5f0f6e26a0"} Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.856662 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gpffz" event={"ID":"d0f26834-34b1-4d79-a777-d080ed1eb981","Type":"ContainerStarted","Data":"a44e848f62f6bc567dd58dc738319011baa3ef8f2b6c7b3d0fd249884b9db241"} Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.858374 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0a52-account-create-update-6f92x" event={"ID":"92ff8de1-8616-43f7-8bd4-1de3f4730b5f","Type":"ContainerStarted","Data":"ce53b5b095b3fafa98fe4572f4685c5f43c315660eac0a514d0837d4a46a4912"} Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.863496 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0825-account-create-update-sdz9j" event={"ID":"8579a0a1-a803-4422-a697-02b34eb25fae","Type":"ContainerStarted","Data":"a85f230678db183b903bef0486ef9276c7d6816b65c965bf9029caa35ac34281"} Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.923303 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 12 16:11:49 crc kubenswrapper[4693]: I1212 16:11:49.980672 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-kwbtp"] Dec 12 16:11:50 crc kubenswrapper[4693]: I1212 16:11:50.439392 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-8ecf-account-create-update-frq7s"] Dec 12 16:11:50 crc kubenswrapper[4693]: I1212 16:11:50.443231 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-734f-account-create-update-7mp6n"] Dec 12 16:11:50 crc kubenswrapper[4693]: W1212 16:11:50.508444 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f60beb5_6256_43ae_9df7_c57bb4f2d27e.slice/crio-d6a84f9b87c4c15326758f0be878988d16c501142f0743e59b95c0d8dba3686f WatchSource:0}: Error finding container d6a84f9b87c4c15326758f0be878988d16c501142f0743e59b95c0d8dba3686f: Status 404 returned error can't find the container with id d6a84f9b87c4c15326758f0be878988d16c501142f0743e59b95c0d8dba3686f Dec 12 16:11:50 crc kubenswrapper[4693]: I1212 16:11:50.872570 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8ecf-account-create-update-frq7s" event={"ID":"4f60beb5-6256-43ae-9df7-c57bb4f2d27e","Type":"ContainerStarted","Data":"d6a84f9b87c4c15326758f0be878988d16c501142f0743e59b95c0d8dba3686f"} Dec 12 16:11:50 crc kubenswrapper[4693]: I1212 16:11:50.875188 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" event={"ID":"0545ccf6-794f-4a89-912a-0e07df8534f6","Type":"ContainerStarted","Data":"c5fbdf994d0ccfa3303aad6ff34a4ab6fc3442a4c6d81b9c39abd9ee873b0af4"} Dec 12 16:11:50 crc kubenswrapper[4693]: I1212 16:11:50.875399 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:50 crc kubenswrapper[4693]: I1212 16:11:50.876933 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-kwbtp" 
event={"ID":"c0ae148a-761b-42e5-b88f-909c600a34fe","Type":"ContainerStarted","Data":"6190cd6e7a5d693d3be9f8e486f66805061313fefcc5db27911d0f2cbdbd6bbd"} Dec 12 16:11:50 crc kubenswrapper[4693]: I1212 16:11:50.876962 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-kwbtp" event={"ID":"c0ae148a-761b-42e5-b88f-909c600a34fe","Type":"ContainerStarted","Data":"97829e3d5bbcda4b08eb17c8154e4bd0e74b166d911d16fca9e3c4bb65e1a382"} Dec 12 16:11:50 crc kubenswrapper[4693]: I1212 16:11:50.888567 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-734f-account-create-update-7mp6n" event={"ID":"f36c30f6-251f-4311-b1cd-0094934a275a","Type":"ContainerStarted","Data":"e461205d9fb5a523ed11f6dd22349e0eb8bc1a82e17b16487206239485970be2"} Dec 12 16:11:50 crc kubenswrapper[4693]: I1212 16:11:50.894877 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" podStartSLOduration=3.894856794 podStartE2EDuration="3.894856794s" podCreationTimestamp="2025-12-12 16:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:11:50.89359922 +0000 UTC m=+1538.062238841" watchObservedRunningTime="2025-12-12 16:11:50.894856794 +0000 UTC m=+1538.063496405" Dec 12 16:11:50 crc kubenswrapper[4693]: I1212 16:11:50.916776 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-kwbtp" podStartSLOduration=2.916758873 podStartE2EDuration="2.916758873s" podCreationTimestamp="2025-12-12 16:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:11:50.907303259 +0000 UTC m=+1538.075942880" watchObservedRunningTime="2025-12-12 16:11:50.916758873 +0000 UTC m=+1538.085398474" Dec 12 16:11:51 crc kubenswrapper[4693]: I1212 16:11:51.910762 4693 generic.go:334] "Generic (PLEG): container finished" podID="9371580e-96d9-4d4e-96ef-d049476af5eb" containerID="4e78a73f50845b9bfe7d7d087a29214f988f35e70bcab7a38fb2efa15f2a851d" exitCode=0 Dec 12 16:11:51 crc kubenswrapper[4693]: I1212 16:11:51.911504 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-fc4kg" event={"ID":"9371580e-96d9-4d4e-96ef-d049476af5eb","Type":"ContainerDied","Data":"4e78a73f50845b9bfe7d7d087a29214f988f35e70bcab7a38fb2efa15f2a851d"} Dec 12 16:11:51 crc kubenswrapper[4693]: I1212 16:11:51.914226 4693 generic.go:334] "Generic (PLEG): container finished" podID="d0f26834-34b1-4d79-a777-d080ed1eb981" containerID="5eef873a8017a549f187bc0ed2b233785fd9d86af8e0baa9a36edb40782e14c7" exitCode=0 Dec 12 16:11:51 crc kubenswrapper[4693]: I1212 16:11:51.914303 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gpffz" event={"ID":"d0f26834-34b1-4d79-a777-d080ed1eb981","Type":"ContainerDied","Data":"5eef873a8017a549f187bc0ed2b233785fd9d86af8e0baa9a36edb40782e14c7"} Dec 12 16:11:51 crc kubenswrapper[4693]: I1212 16:11:51.916923 4693 generic.go:334] "Generic (PLEG): container finished" podID="c0ae148a-761b-42e5-b88f-909c600a34fe" containerID="6190cd6e7a5d693d3be9f8e486f66805061313fefcc5db27911d0f2cbdbd6bbd" exitCode=0 Dec 12 16:11:51 crc kubenswrapper[4693]: I1212 16:11:51.917018 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-kwbtp" 
event={"ID":"c0ae148a-761b-42e5-b88f-909c600a34fe","Type":"ContainerDied","Data":"6190cd6e7a5d693d3be9f8e486f66805061313fefcc5db27911d0f2cbdbd6bbd"} Dec 12 16:11:51 crc kubenswrapper[4693]: I1212 16:11:51.919873 4693 generic.go:334] "Generic (PLEG): container finished" podID="f36c30f6-251f-4311-b1cd-0094934a275a" containerID="f44acde98f21f3b2b2c50cdbc28f448522796792d9521a3168a0ae98ab025817" exitCode=0 Dec 12 16:11:51 crc kubenswrapper[4693]: I1212 16:11:51.919907 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-734f-account-create-update-7mp6n" event={"ID":"f36c30f6-251f-4311-b1cd-0094934a275a","Type":"ContainerDied","Data":"f44acde98f21f3b2b2c50cdbc28f448522796792d9521a3168a0ae98ab025817"} Dec 12 16:11:51 crc kubenswrapper[4693]: I1212 16:11:51.935126 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0a52-account-create-update-6f92x" event={"ID":"92ff8de1-8616-43f7-8bd4-1de3f4730b5f","Type":"ContainerStarted","Data":"17fa181b0cc0318095aad6969966cc6e3cae07471badac880e252e96913a8167"} Dec 12 16:11:51 crc kubenswrapper[4693]: I1212 16:11:51.943065 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0825-account-create-update-sdz9j" event={"ID":"8579a0a1-a803-4422-a697-02b34eb25fae","Type":"ContainerStarted","Data":"9386fbc3721c2552e9b2cc95c711d33dcf7c2e5f852515283d0397532b043493"} Dec 12 16:11:51 crc kubenswrapper[4693]: I1212 16:11:51.946052 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8ecf-account-create-update-frq7s" event={"ID":"4f60beb5-6256-43ae-9df7-c57bb4f2d27e","Type":"ContainerStarted","Data":"24ee73087c6851104146242608495e6d664845962234a77ad7bec9ce8b9dd35a"} Dec 12 16:11:51 crc kubenswrapper[4693]: I1212 16:11:51.955990 4693 generic.go:334] "Generic (PLEG): container finished" podID="2f2d3ae1-1fd0-4058-9723-e136c74ee739" containerID="4fecf5693d85a30d15126bf103858e0db32eeb536d5a364d5811de86d997818f" exitCode=0 Dec 12 16:11:51 crc kubenswrapper[4693]: I1212 16:11:51.956306 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nxx54" event={"ID":"2f2d3ae1-1fd0-4058-9723-e136c74ee739","Type":"ContainerDied","Data":"4fecf5693d85a30d15126bf103858e0db32eeb536d5a364d5811de86d997818f"} Dec 12 16:11:52 crc kubenswrapper[4693]: I1212 16:11:52.017803 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-8ecf-account-create-update-frq7s" podStartSLOduration=4.01777854 podStartE2EDuration="4.01777854s" podCreationTimestamp="2025-12-12 16:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:11:51.998120881 +0000 UTC m=+1539.166760482" watchObservedRunningTime="2025-12-12 16:11:52.01777854 +0000 UTC m=+1539.186418141" Dec 12 16:11:52 crc kubenswrapper[4693]: I1212 16:11:52.043688 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-0825-account-create-update-sdz9j" podStartSLOduration=4.043670296 podStartE2EDuration="4.043670296s" podCreationTimestamp="2025-12-12 16:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:11:52.014995725 +0000 UTC m=+1539.183635326" watchObservedRunningTime="2025-12-12 16:11:52.043670296 +0000 UTC m=+1539.212309897" Dec 12 16:11:52 crc kubenswrapper[4693]: I1212 16:11:52.054752 4693 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/cinder-0a52-account-create-update-6f92x" podStartSLOduration=5.054737764 podStartE2EDuration="5.054737764s" podCreationTimestamp="2025-12-12 16:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:11:52.027103271 +0000 UTC m=+1539.195742872" watchObservedRunningTime="2025-12-12 16:11:52.054737764 +0000 UTC m=+1539.223377365" Dec 12 16:11:52 crc kubenswrapper[4693]: I1212 16:11:52.972180 4693 generic.go:334] "Generic (PLEG): container finished" podID="4f60beb5-6256-43ae-9df7-c57bb4f2d27e" containerID="24ee73087c6851104146242608495e6d664845962234a77ad7bec9ce8b9dd35a" exitCode=0 Dec 12 16:11:52 crc kubenswrapper[4693]: I1212 16:11:52.972395 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8ecf-account-create-update-frq7s" event={"ID":"4f60beb5-6256-43ae-9df7-c57bb4f2d27e","Type":"ContainerDied","Data":"24ee73087c6851104146242608495e6d664845962234a77ad7bec9ce8b9dd35a"} Dec 12 16:11:52 crc kubenswrapper[4693]: I1212 16:11:52.975516 4693 generic.go:334] "Generic (PLEG): container finished" podID="92ff8de1-8616-43f7-8bd4-1de3f4730b5f" containerID="17fa181b0cc0318095aad6969966cc6e3cae07471badac880e252e96913a8167" exitCode=0 Dec 12 16:11:52 crc kubenswrapper[4693]: I1212 16:11:52.975585 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0a52-account-create-update-6f92x" event={"ID":"92ff8de1-8616-43f7-8bd4-1de3f4730b5f","Type":"ContainerDied","Data":"17fa181b0cc0318095aad6969966cc6e3cae07471badac880e252e96913a8167"} Dec 12 16:11:52 crc kubenswrapper[4693]: I1212 16:11:52.978071 4693 generic.go:334] "Generic (PLEG): container finished" podID="8579a0a1-a803-4422-a697-02b34eb25fae" containerID="9386fbc3721c2552e9b2cc95c711d33dcf7c2e5f852515283d0397532b043493" exitCode=0 Dec 12 16:11:52 crc kubenswrapper[4693]: I1212 16:11:52.978312 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0825-account-create-update-sdz9j" event={"ID":"8579a0a1-a803-4422-a697-02b34eb25fae","Type":"ContainerDied","Data":"9386fbc3721c2552e9b2cc95c711d33dcf7c2e5f852515283d0397532b043493"} Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.449822 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.464290 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nxx54" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.485140 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0a52-account-create-update-6f92x" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.542503 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-kwbtp" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.563714 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0825-account-create-update-sdz9j" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.607334 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-734f-account-create-update-7mp6n" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.609215 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-8ecf-account-create-update-frq7s" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.647822 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8579a0a1-a803-4422-a697-02b34eb25fae-operator-scripts\") pod \"8579a0a1-a803-4422-a697-02b34eb25fae\" (UID: \"8579a0a1-a803-4422-a697-02b34eb25fae\") " Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.648086 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq5j9\" (UniqueName: \"kubernetes.io/projected/92ff8de1-8616-43f7-8bd4-1de3f4730b5f-kube-api-access-fq5j9\") pod \"92ff8de1-8616-43f7-8bd4-1de3f4730b5f\" (UID: \"92ff8de1-8616-43f7-8bd4-1de3f4730b5f\") " Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.648186 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f36c30f6-251f-4311-b1cd-0094934a275a-operator-scripts\") pod \"f36c30f6-251f-4311-b1cd-0094934a275a\" (UID: \"f36c30f6-251f-4311-b1cd-0094934a275a\") " Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.648315 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0ae148a-761b-42e5-b88f-909c600a34fe-operator-scripts\") pod \"c0ae148a-761b-42e5-b88f-909c600a34fe\" (UID: \"c0ae148a-761b-42e5-b88f-909c600a34fe\") " Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.648407 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f60beb5-6256-43ae-9df7-c57bb4f2d27e-operator-scripts\") pod \"4f60beb5-6256-43ae-9df7-c57bb4f2d27e\" (UID: \"4f60beb5-6256-43ae-9df7-c57bb4f2d27e\") " Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.648487 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8svlj\" (UniqueName: \"kubernetes.io/projected/8579a0a1-a803-4422-a697-02b34eb25fae-kube-api-access-8svlj\") pod \"8579a0a1-a803-4422-a697-02b34eb25fae\" (UID: \"8579a0a1-a803-4422-a697-02b34eb25fae\") " Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.648555 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92ff8de1-8616-43f7-8bd4-1de3f4730b5f-operator-scripts\") pod \"92ff8de1-8616-43f7-8bd4-1de3f4730b5f\" (UID: \"92ff8de1-8616-43f7-8bd4-1de3f4730b5f\") " Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.648642 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms2s8\" (UniqueName: \"kubernetes.io/projected/f36c30f6-251f-4311-b1cd-0094934a275a-kube-api-access-ms2s8\") pod \"f36c30f6-251f-4311-b1cd-0094934a275a\" (UID: \"f36c30f6-251f-4311-b1cd-0094934a275a\") " Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.648727 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjzdz\" (UniqueName: \"kubernetes.io/projected/4f60beb5-6256-43ae-9df7-c57bb4f2d27e-kube-api-access-jjzdz\") pod \"4f60beb5-6256-43ae-9df7-c57bb4f2d27e\" (UID: \"4f60beb5-6256-43ae-9df7-c57bb4f2d27e\") " Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.648811 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg77b\" (UniqueName: 
\"kubernetes.io/projected/c0ae148a-761b-42e5-b88f-909c600a34fe-kube-api-access-xg77b\") pod \"c0ae148a-761b-42e5-b88f-909c600a34fe\" (UID: \"c0ae148a-761b-42e5-b88f-909c600a34fe\") " Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.650675 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsx4l\" (UniqueName: \"kubernetes.io/projected/2f2d3ae1-1fd0-4058-9723-e136c74ee739-kube-api-access-hsx4l\") pod \"2f2d3ae1-1fd0-4058-9723-e136c74ee739\" (UID: \"2f2d3ae1-1fd0-4058-9723-e136c74ee739\") " Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.650827 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f2d3ae1-1fd0-4058-9723-e136c74ee739-operator-scripts\") pod \"2f2d3ae1-1fd0-4058-9723-e136c74ee739\" (UID: \"2f2d3ae1-1fd0-4058-9723-e136c74ee739\") " Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.652110 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f2d3ae1-1fd0-4058-9723-e136c74ee739-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f2d3ae1-1fd0-4058-9723-e136c74ee739" (UID: "2f2d3ae1-1fd0-4058-9723-e136c74ee739"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.654593 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-fc4kg" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.654967 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0ae148a-761b-42e5-b88f-909c600a34fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0ae148a-761b-42e5-b88f-909c600a34fe" (UID: "c0ae148a-761b-42e5-b88f-909c600a34fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.655413 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f36c30f6-251f-4311-b1cd-0094934a275a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f36c30f6-251f-4311-b1cd-0094934a275a" (UID: "f36c30f6-251f-4311-b1cd-0094934a275a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.657306 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f60beb5-6256-43ae-9df7-c57bb4f2d27e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4f60beb5-6256-43ae-9df7-c57bb4f2d27e" (UID: "4f60beb5-6256-43ae-9df7-c57bb4f2d27e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.657540 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8579a0a1-a803-4422-a697-02b34eb25fae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8579a0a1-a803-4422-a697-02b34eb25fae" (UID: "8579a0a1-a803-4422-a697-02b34eb25fae"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.658096 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92ff8de1-8616-43f7-8bd4-1de3f4730b5f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92ff8de1-8616-43f7-8bd4-1de3f4730b5f" (UID: "92ff8de1-8616-43f7-8bd4-1de3f4730b5f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.658197 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7z76l"] Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.658531 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-7z76l" podUID="52f33022-5f32-4ca6-bc92-5753d41cd038" containerName="dnsmasq-dns" containerID="cri-o://f8e2be273ad420bdb52a1c546cfb1d5a5358b756b210334bdfd4cff0c1e75dd1" gracePeriod=10 Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.659907 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8579a0a1-a803-4422-a697-02b34eb25fae-kube-api-access-8svlj" (OuterVolumeSpecName: "kube-api-access-8svlj") pod "8579a0a1-a803-4422-a697-02b34eb25fae" (UID: "8579a0a1-a803-4422-a697-02b34eb25fae"). InnerVolumeSpecName "kube-api-access-8svlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.667219 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f36c30f6-251f-4311-b1cd-0094934a275a-kube-api-access-ms2s8" (OuterVolumeSpecName: "kube-api-access-ms2s8") pod "f36c30f6-251f-4311-b1cd-0094934a275a" (UID: "f36c30f6-251f-4311-b1cd-0094934a275a"). InnerVolumeSpecName "kube-api-access-ms2s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.669305 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f60beb5-6256-43ae-9df7-c57bb4f2d27e-kube-api-access-jjzdz" (OuterVolumeSpecName: "kube-api-access-jjzdz") pod "4f60beb5-6256-43ae-9df7-c57bb4f2d27e" (UID: "4f60beb5-6256-43ae-9df7-c57bb4f2d27e"). InnerVolumeSpecName "kube-api-access-jjzdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.669455 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0ae148a-761b-42e5-b88f-909c600a34fe-kube-api-access-xg77b" (OuterVolumeSpecName: "kube-api-access-xg77b") pod "c0ae148a-761b-42e5-b88f-909c600a34fe" (UID: "c0ae148a-761b-42e5-b88f-909c600a34fe"). InnerVolumeSpecName "kube-api-access-xg77b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.670526 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92ff8de1-8616-43f7-8bd4-1de3f4730b5f-kube-api-access-fq5j9" (OuterVolumeSpecName: "kube-api-access-fq5j9") pod "92ff8de1-8616-43f7-8bd4-1de3f4730b5f" (UID: "92ff8de1-8616-43f7-8bd4-1de3f4730b5f"). InnerVolumeSpecName "kube-api-access-fq5j9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.672851 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f2d3ae1-1fd0-4058-9723-e136c74ee739-kube-api-access-hsx4l" (OuterVolumeSpecName: "kube-api-access-hsx4l") pod "2f2d3ae1-1fd0-4058-9723-e136c74ee739" (UID: "2f2d3ae1-1fd0-4058-9723-e136c74ee739"). InnerVolumeSpecName "kube-api-access-hsx4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.686434 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gpffz" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.753011 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htpf9\" (UniqueName: \"kubernetes.io/projected/9371580e-96d9-4d4e-96ef-d049476af5eb-kube-api-access-htpf9\") pod \"9371580e-96d9-4d4e-96ef-d049476af5eb\" (UID: \"9371580e-96d9-4d4e-96ef-d049476af5eb\") " Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.753076 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0f26834-34b1-4d79-a777-d080ed1eb981-operator-scripts\") pod \"d0f26834-34b1-4d79-a777-d080ed1eb981\" (UID: \"d0f26834-34b1-4d79-a777-d080ed1eb981\") " Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.753153 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shvvl\" (UniqueName: \"kubernetes.io/projected/d0f26834-34b1-4d79-a777-d080ed1eb981-kube-api-access-shvvl\") pod \"d0f26834-34b1-4d79-a777-d080ed1eb981\" (UID: \"d0f26834-34b1-4d79-a777-d080ed1eb981\") " Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.753235 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9371580e-96d9-4d4e-96ef-d049476af5eb-operator-scripts\") pod \"9371580e-96d9-4d4e-96ef-d049476af5eb\" (UID: \"9371580e-96d9-4d4e-96ef-d049476af5eb\") " Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.753711 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f60beb5-6256-43ae-9df7-c57bb4f2d27e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.753728 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8svlj\" (UniqueName: \"kubernetes.io/projected/8579a0a1-a803-4422-a697-02b34eb25fae-kube-api-access-8svlj\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.753739 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92ff8de1-8616-43f7-8bd4-1de3f4730b5f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.753748 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms2s8\" (UniqueName: \"kubernetes.io/projected/f36c30f6-251f-4311-b1cd-0094934a275a-kube-api-access-ms2s8\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.753756 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjzdz\" (UniqueName: \"kubernetes.io/projected/4f60beb5-6256-43ae-9df7-c57bb4f2d27e-kube-api-access-jjzdz\") on node \"crc\" DevicePath \"\"" Dec 12 
16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.753764 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg77b\" (UniqueName: \"kubernetes.io/projected/c0ae148a-761b-42e5-b88f-909c600a34fe-kube-api-access-xg77b\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.753773 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsx4l\" (UniqueName: \"kubernetes.io/projected/2f2d3ae1-1fd0-4058-9723-e136c74ee739-kube-api-access-hsx4l\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.753781 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f2d3ae1-1fd0-4058-9723-e136c74ee739-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.753791 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8579a0a1-a803-4422-a697-02b34eb25fae-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.753803 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq5j9\" (UniqueName: \"kubernetes.io/projected/92ff8de1-8616-43f7-8bd4-1de3f4730b5f-kube-api-access-fq5j9\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.753816 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f36c30f6-251f-4311-b1cd-0094934a275a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.753828 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0ae148a-761b-42e5-b88f-909c600a34fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.754142 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9371580e-96d9-4d4e-96ef-d049476af5eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9371580e-96d9-4d4e-96ef-d049476af5eb" (UID: "9371580e-96d9-4d4e-96ef-d049476af5eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.754233 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0f26834-34b1-4d79-a777-d080ed1eb981-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0f26834-34b1-4d79-a777-d080ed1eb981" (UID: "d0f26834-34b1-4d79-a777-d080ed1eb981"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.767708 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0f26834-34b1-4d79-a777-d080ed1eb981-kube-api-access-shvvl" (OuterVolumeSpecName: "kube-api-access-shvvl") pod "d0f26834-34b1-4d79-a777-d080ed1eb981" (UID: "d0f26834-34b1-4d79-a777-d080ed1eb981"). InnerVolumeSpecName "kube-api-access-shvvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.783561 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9371580e-96d9-4d4e-96ef-d049476af5eb-kube-api-access-htpf9" (OuterVolumeSpecName: "kube-api-access-htpf9") pod "9371580e-96d9-4d4e-96ef-d049476af5eb" (UID: "9371580e-96d9-4d4e-96ef-d049476af5eb"). InnerVolumeSpecName "kube-api-access-htpf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.857971 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htpf9\" (UniqueName: \"kubernetes.io/projected/9371580e-96d9-4d4e-96ef-d049476af5eb-kube-api-access-htpf9\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.858024 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0f26834-34b1-4d79-a777-d080ed1eb981-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.858037 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shvvl\" (UniqueName: \"kubernetes.io/projected/d0f26834-34b1-4d79-a777-d080ed1eb981-kube-api-access-shvvl\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:57 crc kubenswrapper[4693]: I1212 16:11:57.858048 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9371580e-96d9-4d4e-96ef-d049476af5eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.065558 4693 generic.go:334] "Generic (PLEG): container finished" podID="52f33022-5f32-4ca6-bc92-5753d41cd038" containerID="f8e2be273ad420bdb52a1c546cfb1d5a5358b756b210334bdfd4cff0c1e75dd1" exitCode=0 Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.065666 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7z76l" event={"ID":"52f33022-5f32-4ca6-bc92-5753d41cd038","Type":"ContainerDied","Data":"f8e2be273ad420bdb52a1c546cfb1d5a5358b756b210334bdfd4cff0c1e75dd1"} Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.077693 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-8ecf-account-create-update-frq7s" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.077987 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8ecf-account-create-update-frq7s" event={"ID":"4f60beb5-6256-43ae-9df7-c57bb4f2d27e","Type":"ContainerDied","Data":"d6a84f9b87c4c15326758f0be878988d16c501142f0743e59b95c0d8dba3686f"} Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.078030 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6a84f9b87c4c15326758f0be878988d16c501142f0743e59b95c0d8dba3686f" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.103601 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0a52-account-create-update-6f92x" event={"ID":"92ff8de1-8616-43f7-8bd4-1de3f4730b5f","Type":"ContainerDied","Data":"ce53b5b095b3fafa98fe4572f4685c5f43c315660eac0a514d0837d4a46a4912"} Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.103645 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce53b5b095b3fafa98fe4572f4685c5f43c315660eac0a514d0837d4a46a4912" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.103744 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0a52-account-create-update-6f92x" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.135597 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-kwbtp" event={"ID":"c0ae148a-761b-42e5-b88f-909c600a34fe","Type":"ContainerDied","Data":"97829e3d5bbcda4b08eb17c8154e4bd0e74b166d911d16fca9e3c4bb65e1a382"} Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.135635 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97829e3d5bbcda4b08eb17c8154e4bd0e74b166d911d16fca9e3c4bb65e1a382" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.135702 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-kwbtp" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.173875 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-734f-account-create-update-7mp6n" event={"ID":"f36c30f6-251f-4311-b1cd-0094934a275a","Type":"ContainerDied","Data":"e461205d9fb5a523ed11f6dd22349e0eb8bc1a82e17b16487206239485970be2"} Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.174067 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e461205d9fb5a523ed11f6dd22349e0eb8bc1a82e17b16487206239485970be2" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.173919 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-734f-account-create-update-7mp6n" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.195181 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0825-account-create-update-sdz9j" event={"ID":"8579a0a1-a803-4422-a697-02b34eb25fae","Type":"ContainerDied","Data":"a85f230678db183b903bef0486ef9276c7d6816b65c965bf9029caa35ac34281"} Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.195556 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a85f230678db183b903bef0486ef9276c7d6816b65c965bf9029caa35ac34281" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.195658 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0825-account-create-update-sdz9j" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.200226 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nxx54" event={"ID":"2f2d3ae1-1fd0-4058-9723-e136c74ee739","Type":"ContainerDied","Data":"c336f6ee9e6625cd65e2e941ecc5893916889d0cdd31c58ae565af179194a45b"} Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.200382 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c336f6ee9e6625cd65e2e941ecc5893916889d0cdd31c58ae565af179194a45b" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.200298 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nxx54" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.213497 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-md2sc" event={"ID":"cb7abcfa-6215-4393-ab01-7710ddd3055d","Type":"ContainerStarted","Data":"22949cb7572fb584130285c71a4d0da6ee9debc53e44264501698c973035e19d"} Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.217430 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-fc4kg" event={"ID":"9371580e-96d9-4d4e-96ef-d049476af5eb","Type":"ContainerDied","Data":"fa0c23f2da6368827a257d261404d244e8f8afbcb4f5d3ef21127b5f0f6e26a0"} Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.217469 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa0c23f2da6368827a257d261404d244e8f8afbcb4f5d3ef21127b5f0f6e26a0" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.217546 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-fc4kg" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.237094 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gpffz" event={"ID":"d0f26834-34b1-4d79-a777-d080ed1eb981","Type":"ContainerDied","Data":"a44e848f62f6bc567dd58dc738319011baa3ef8f2b6c7b3d0fd249884b9db241"} Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.237127 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a44e848f62f6bc567dd58dc738319011baa3ef8f2b6c7b3d0fd249884b9db241" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.237188 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gpffz" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.266875 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-md2sc" podStartSLOduration=2.765443533 podStartE2EDuration="10.266852299s" podCreationTimestamp="2025-12-12 16:11:48 +0000 UTC" firstStartedPulling="2025-12-12 16:11:49.732441805 +0000 UTC m=+1536.901081406" lastFinishedPulling="2025-12-12 16:11:57.233850571 +0000 UTC m=+1544.402490172" observedRunningTime="2025-12-12 16:11:58.236710418 +0000 UTC m=+1545.405350019" watchObservedRunningTime="2025-12-12 16:11:58.266852299 +0000 UTC m=+1545.435491900" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.486499 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.578048 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cxtq\" (UniqueName: \"kubernetes.io/projected/52f33022-5f32-4ca6-bc92-5753d41cd038-kube-api-access-5cxtq\") pod \"52f33022-5f32-4ca6-bc92-5753d41cd038\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.578110 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-dns-svc\") pod \"52f33022-5f32-4ca6-bc92-5753d41cd038\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.578362 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-config\") pod \"52f33022-5f32-4ca6-bc92-5753d41cd038\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.578437 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-ovsdbserver-sb\") pod \"52f33022-5f32-4ca6-bc92-5753d41cd038\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.579058 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-ovsdbserver-nb\") pod \"52f33022-5f32-4ca6-bc92-5753d41cd038\" (UID: \"52f33022-5f32-4ca6-bc92-5753d41cd038\") " Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.588609 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52f33022-5f32-4ca6-bc92-5753d41cd038-kube-api-access-5cxtq" (OuterVolumeSpecName: "kube-api-access-5cxtq") pod "52f33022-5f32-4ca6-bc92-5753d41cd038" (UID: "52f33022-5f32-4ca6-bc92-5753d41cd038"). InnerVolumeSpecName "kube-api-access-5cxtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.637970 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "52f33022-5f32-4ca6-bc92-5753d41cd038" (UID: "52f33022-5f32-4ca6-bc92-5753d41cd038"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.650735 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52f33022-5f32-4ca6-bc92-5753d41cd038" (UID: "52f33022-5f32-4ca6-bc92-5753d41cd038"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.669919 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52f33022-5f32-4ca6-bc92-5753d41cd038" (UID: "52f33022-5f32-4ca6-bc92-5753d41cd038"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.676036 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-config" (OuterVolumeSpecName: "config") pod "52f33022-5f32-4ca6-bc92-5753d41cd038" (UID: "52f33022-5f32-4ca6-bc92-5753d41cd038"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.684986 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.685023 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.685068 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.685079 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cxtq\" (UniqueName: \"kubernetes.io/projected/52f33022-5f32-4ca6-bc92-5753d41cd038-kube-api-access-5cxtq\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:58 crc kubenswrapper[4693]: I1212 16:11:58.685087 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52f33022-5f32-4ca6-bc92-5753d41cd038-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 16:11:59 crc kubenswrapper[4693]: I1212 16:11:59.249389 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7z76l" event={"ID":"52f33022-5f32-4ca6-bc92-5753d41cd038","Type":"ContainerDied","Data":"57fa5f06934eee02a066527d644e387576a36634299f0fe2f7446cde271318ec"} Dec 12 16:11:59 crc kubenswrapper[4693]: I1212 16:11:59.249430 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7z76l" Dec 12 16:11:59 crc kubenswrapper[4693]: I1212 16:11:59.249479 4693 scope.go:117] "RemoveContainer" containerID="f8e2be273ad420bdb52a1c546cfb1d5a5358b756b210334bdfd4cff0c1e75dd1" Dec 12 16:11:59 crc kubenswrapper[4693]: I1212 16:11:59.250916 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r9p8t" event={"ID":"56fb0f10-fbce-4aed-9a10-7128021ce48f","Type":"ContainerStarted","Data":"28fd08091e76687fb5188f41be0dea9c01a9ba08ff14b73e9d3358907e805303"} Dec 12 16:11:59 crc kubenswrapper[4693]: I1212 16:11:59.279116 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-r9p8t" podStartSLOduration=4.881676018 podStartE2EDuration="39.279091797s" podCreationTimestamp="2025-12-12 16:11:20 +0000 UTC" firstStartedPulling="2025-12-12 16:11:22.835764264 +0000 UTC m=+1510.004403875" lastFinishedPulling="2025-12-12 16:11:57.233180053 +0000 UTC m=+1544.401819654" observedRunningTime="2025-12-12 16:11:59.270872306 +0000 UTC m=+1546.439511907" watchObservedRunningTime="2025-12-12 16:11:59.279091797 +0000 UTC m=+1546.447731408" Dec 12 16:11:59 crc kubenswrapper[4693]: I1212 16:11:59.290713 4693 scope.go:117] "RemoveContainer" containerID="b7553a4dec27c71fbcd14aebfe5f357a4f40de90656c086582e6466a9a660ac0" Dec 12 16:11:59 crc kubenswrapper[4693]: I1212 16:11:59.312751 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7z76l"] Dec 12 16:11:59 crc kubenswrapper[4693]: I1212 16:11:59.323434 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7z76l"] Dec 12 16:11:59 crc kubenswrapper[4693]: I1212 16:11:59.387095 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52f33022-5f32-4ca6-bc92-5753d41cd038" path="/var/lib/kubelet/pods/52f33022-5f32-4ca6-bc92-5753d41cd038/volumes" Dec 12 16:11:59 crc kubenswrapper[4693]: I1212 16:11:59.923497 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 12 16:11:59 crc kubenswrapper[4693]: I1212 16:11:59.931743 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 12 16:12:00 crc kubenswrapper[4693]: I1212 16:12:00.265885 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 12 16:12:03 crc kubenswrapper[4693]: I1212 16:12:03.308506 4693 generic.go:334] "Generic (PLEG): container finished" podID="cb7abcfa-6215-4393-ab01-7710ddd3055d" containerID="22949cb7572fb584130285c71a4d0da6ee9debc53e44264501698c973035e19d" exitCode=0 Dec 12 16:12:03 crc kubenswrapper[4693]: I1212 16:12:03.308755 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-md2sc" event={"ID":"cb7abcfa-6215-4393-ab01-7710ddd3055d","Type":"ContainerDied","Data":"22949cb7572fb584130285c71a4d0da6ee9debc53e44264501698c973035e19d"} Dec 12 16:12:04 crc kubenswrapper[4693]: I1212 16:12:04.800725 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-md2sc" Dec 12 16:12:04 crc kubenswrapper[4693]: I1212 16:12:04.921138 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tt8h\" (UniqueName: \"kubernetes.io/projected/cb7abcfa-6215-4393-ab01-7710ddd3055d-kube-api-access-2tt8h\") pod \"cb7abcfa-6215-4393-ab01-7710ddd3055d\" (UID: \"cb7abcfa-6215-4393-ab01-7710ddd3055d\") " Dec 12 16:12:04 crc kubenswrapper[4693]: I1212 16:12:04.921322 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7abcfa-6215-4393-ab01-7710ddd3055d-combined-ca-bundle\") pod \"cb7abcfa-6215-4393-ab01-7710ddd3055d\" (UID: \"cb7abcfa-6215-4393-ab01-7710ddd3055d\") " Dec 12 16:12:04 crc kubenswrapper[4693]: I1212 16:12:04.921417 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb7abcfa-6215-4393-ab01-7710ddd3055d-config-data\") pod \"cb7abcfa-6215-4393-ab01-7710ddd3055d\" (UID: \"cb7abcfa-6215-4393-ab01-7710ddd3055d\") " Dec 12 16:12:04 crc kubenswrapper[4693]: I1212 16:12:04.930800 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb7abcfa-6215-4393-ab01-7710ddd3055d-kube-api-access-2tt8h" (OuterVolumeSpecName: "kube-api-access-2tt8h") pod "cb7abcfa-6215-4393-ab01-7710ddd3055d" (UID: "cb7abcfa-6215-4393-ab01-7710ddd3055d"). InnerVolumeSpecName "kube-api-access-2tt8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:12:04 crc kubenswrapper[4693]: I1212 16:12:04.970742 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb7abcfa-6215-4393-ab01-7710ddd3055d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb7abcfa-6215-4393-ab01-7710ddd3055d" (UID: "cb7abcfa-6215-4393-ab01-7710ddd3055d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:12:04 crc kubenswrapper[4693]: I1212 16:12:04.995140 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb7abcfa-6215-4393-ab01-7710ddd3055d-config-data" (OuterVolumeSpecName: "config-data") pod "cb7abcfa-6215-4393-ab01-7710ddd3055d" (UID: "cb7abcfa-6215-4393-ab01-7710ddd3055d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.024721 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb7abcfa-6215-4393-ab01-7710ddd3055d-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.025067 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tt8h\" (UniqueName: \"kubernetes.io/projected/cb7abcfa-6215-4393-ab01-7710ddd3055d-kube-api-access-2tt8h\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.025175 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7abcfa-6215-4393-ab01-7710ddd3055d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.331247 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-md2sc" event={"ID":"cb7abcfa-6215-4393-ab01-7710ddd3055d","Type":"ContainerDied","Data":"b22383e9497edfc6ff406bb2152a1a31f47e5b17e65935e304d01e6d51b30e58"} Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.331301 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b22383e9497edfc6ff406bb2152a1a31f47e5b17e65935e304d01e6d51b30e58" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.331350 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-md2sc" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.634939 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4bwxm"] Dec 12 16:12:05 crc kubenswrapper[4693]: E1212 16:12:05.635813 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f33022-5f32-4ca6-bc92-5753d41cd038" containerName="init" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.635832 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f33022-5f32-4ca6-bc92-5753d41cd038" containerName="init" Dec 12 16:12:05 crc kubenswrapper[4693]: E1212 16:12:05.635856 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92ff8de1-8616-43f7-8bd4-1de3f4730b5f" containerName="mariadb-account-create-update" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.635866 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="92ff8de1-8616-43f7-8bd4-1de3f4730b5f" containerName="mariadb-account-create-update" Dec 12 16:12:05 crc kubenswrapper[4693]: E1212 16:12:05.635879 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7abcfa-6215-4393-ab01-7710ddd3055d" containerName="keystone-db-sync" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.635888 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7abcfa-6215-4393-ab01-7710ddd3055d" containerName="keystone-db-sync" Dec 12 16:12:05 crc kubenswrapper[4693]: E1212 16:12:05.635902 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f60beb5-6256-43ae-9df7-c57bb4f2d27e" containerName="mariadb-account-create-update" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.635910 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f60beb5-6256-43ae-9df7-c57bb4f2d27e" containerName="mariadb-account-create-update" Dec 12 16:12:05 crc kubenswrapper[4693]: E1212 16:12:05.635917 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9371580e-96d9-4d4e-96ef-d049476af5eb" 
containerName="mariadb-database-create" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.635924 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="9371580e-96d9-4d4e-96ef-d049476af5eb" containerName="mariadb-database-create" Dec 12 16:12:05 crc kubenswrapper[4693]: E1212 16:12:05.635942 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f33022-5f32-4ca6-bc92-5753d41cd038" containerName="dnsmasq-dns" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.635950 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f33022-5f32-4ca6-bc92-5753d41cd038" containerName="dnsmasq-dns" Dec 12 16:12:05 crc kubenswrapper[4693]: E1212 16:12:05.635965 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8579a0a1-a803-4422-a697-02b34eb25fae" containerName="mariadb-account-create-update" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.635971 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="8579a0a1-a803-4422-a697-02b34eb25fae" containerName="mariadb-account-create-update" Dec 12 16:12:05 crc kubenswrapper[4693]: E1212 16:12:05.635982 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2d3ae1-1fd0-4058-9723-e136c74ee739" containerName="mariadb-database-create" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.635989 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2d3ae1-1fd0-4058-9723-e136c74ee739" containerName="mariadb-database-create" Dec 12 16:12:05 crc kubenswrapper[4693]: E1212 16:12:05.636000 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f26834-34b1-4d79-a777-d080ed1eb981" containerName="mariadb-database-create" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.636007 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f26834-34b1-4d79-a777-d080ed1eb981" containerName="mariadb-database-create" Dec 12 16:12:05 crc kubenswrapper[4693]: E1212 16:12:05.636022 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f36c30f6-251f-4311-b1cd-0094934a275a" containerName="mariadb-account-create-update" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.636030 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f36c30f6-251f-4311-b1cd-0094934a275a" containerName="mariadb-account-create-update" Dec 12 16:12:05 crc kubenswrapper[4693]: E1212 16:12:05.636055 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ae148a-761b-42e5-b88f-909c600a34fe" containerName="mariadb-database-create" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.636063 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ae148a-761b-42e5-b88f-909c600a34fe" containerName="mariadb-database-create" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.636324 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="92ff8de1-8616-43f7-8bd4-1de3f4730b5f" containerName="mariadb-account-create-update" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.636348 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f2d3ae1-1fd0-4058-9723-e136c74ee739" containerName="mariadb-database-create" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.636376 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0f26834-34b1-4d79-a777-d080ed1eb981" containerName="mariadb-database-create" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.636391 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f36c30f6-251f-4311-b1cd-0094934a275a" 
containerName="mariadb-account-create-update" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.636404 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7abcfa-6215-4393-ab01-7710ddd3055d" containerName="keystone-db-sync" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.636419 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ae148a-761b-42e5-b88f-909c600a34fe" containerName="mariadb-database-create" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.636427 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="8579a0a1-a803-4422-a697-02b34eb25fae" containerName="mariadb-account-create-update" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.636439 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f33022-5f32-4ca6-bc92-5753d41cd038" containerName="dnsmasq-dns" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.636458 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f60beb5-6256-43ae-9df7-c57bb4f2d27e" containerName="mariadb-account-create-update" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.636474 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="9371580e-96d9-4d4e-96ef-d049476af5eb" containerName="mariadb-database-create" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.637256 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.639908 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.640598 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.640736 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.642125 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9xp5g" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.645120 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.656262 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4bwxm"] Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.670108 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-z279d"] Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.673050 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.700343 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-z279d"] Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.737568 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-fernet-keys\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.737606 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-combined-ca-bundle\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.737640 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-config-data\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.737664 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-credential-keys\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.737690 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-scripts\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.737817 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rqn9\" (UniqueName: \"kubernetes.io/projected/ae72b152-13a7-4392-a111-2a2d1db03a9e-kube-api-access-2rqn9\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.778592 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-wrwhd"] Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.779973 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-wrwhd" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.782342 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-kn7cj" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.786542 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.798480 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-wrwhd"] Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.839932 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-fernet-keys\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.840470 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.840557 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-combined-ca-bundle\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.840637 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-config-data\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.840711 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-credential-keys\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.840803 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-scripts\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.840883 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rqn9\" (UniqueName: \"kubernetes.io/projected/ae72b152-13a7-4392-a111-2a2d1db03a9e-kube-api-access-2rqn9\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.841036 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: 
\"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.841118 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-config\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.841258 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqgrf\" (UniqueName: \"kubernetes.io/projected/4a38ed63-d27f-43cd-aa41-a6804bf83904-kube-api-access-vqgrf\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.841395 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-dns-svc\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.841517 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.845065 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-credential-keys\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.845542 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-config-data\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.851912 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-fernet-keys\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.857464 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-combined-ca-bundle\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.876507 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-scripts\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 
crc kubenswrapper[4693]: I1212 16:12:05.883975 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-55c8q"] Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.885415 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-55c8q" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.891847 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.892181 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.892317 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xm9jc" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.921977 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-55c8q"] Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.929782 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rqn9\" (UniqueName: \"kubernetes.io/projected/ae72b152-13a7-4392-a111-2a2d1db03a9e-kube-api-access-2rqn9\") pod \"keystone-bootstrap-4bwxm\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.941580 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-6fb94"] Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.942951 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.944625 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-dns-svc\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.944674 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.944706 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.944779 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw8hp\" (UniqueName: \"kubernetes.io/projected/f9bc2e28-21b6-42e1-a680-92426ae37ecf-kube-api-access-gw8hp\") pod \"heat-db-sync-wrwhd\" (UID: \"f9bc2e28-21b6-42e1-a680-92426ae37ecf\") " pod="openstack/heat-db-sync-wrwhd" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.944844 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: 
\"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.944869 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-config\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.944897 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9bc2e28-21b6-42e1-a680-92426ae37ecf-config-data\") pod \"heat-db-sync-wrwhd\" (UID: \"f9bc2e28-21b6-42e1-a680-92426ae37ecf\") " pod="openstack/heat-db-sync-wrwhd" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.944949 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9bc2e28-21b6-42e1-a680-92426ae37ecf-combined-ca-bundle\") pod \"heat-db-sync-wrwhd\" (UID: \"f9bc2e28-21b6-42e1-a680-92426ae37ecf\") " pod="openstack/heat-db-sync-wrwhd" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.944979 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqgrf\" (UniqueName: \"kubernetes.io/projected/4a38ed63-d27f-43cd-aa41-a6804bf83904-kube-api-access-vqgrf\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.944981 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.945659 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.946145 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-dns-svc\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.946309 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.946921 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-config\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.946945 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.947038 4693 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-lh46s" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.951419 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.959739 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.968613 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqgrf\" (UniqueName: \"kubernetes.io/projected/4a38ed63-d27f-43cd-aa41-a6804bf83904-kube-api-access-vqgrf\") pod \"dnsmasq-dns-5959f8865f-z279d\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") " pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.980843 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-pxq6t"] Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.982397 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pxq6t" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.989840 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.990063 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-96f49" Dec 12 16:12:05 crc kubenswrapper[4693]: I1212 16:12:05.993841 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-z279d" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.012411 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6fb94"] Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.039052 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pxq6t"] Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.048026 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9bc2e28-21b6-42e1-a680-92426ae37ecf-config-data\") pod \"heat-db-sync-wrwhd\" (UID: \"f9bc2e28-21b6-42e1-a680-92426ae37ecf\") " pod="openstack/heat-db-sync-wrwhd" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.048069 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-scripts\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.048112 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-db-sync-config-data\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.048135 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrv2s\" (UniqueName: \"kubernetes.io/projected/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-kube-api-access-jrv2s\") pod \"neutron-db-sync-55c8q\" (UID: \"a29594ee-7274-4690-a79d-ef6bd3a8a2fd\") " pod="openstack/neutron-db-sync-55c8q" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.048151 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9bc2e28-21b6-42e1-a680-92426ae37ecf-combined-ca-bundle\") pod \"heat-db-sync-wrwhd\" (UID: \"f9bc2e28-21b6-42e1-a680-92426ae37ecf\") " pod="openstack/heat-db-sync-wrwhd" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.048168 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-combined-ca-bundle\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.048202 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv74m\" (UniqueName: \"kubernetes.io/projected/42ae7c15-9f4d-4ef8-83d7-279226e74846-kube-api-access-vv74m\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.048225 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-combined-ca-bundle\") pod \"neutron-db-sync-55c8q\" (UID: \"a29594ee-7274-4690-a79d-ef6bd3a8a2fd\") " pod="openstack/neutron-db-sync-55c8q" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 
16:12:06.048263 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-config-data\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.048317 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-config\") pod \"neutron-db-sync-55c8q\" (UID: \"a29594ee-7274-4690-a79d-ef6bd3a8a2fd\") " pod="openstack/neutron-db-sync-55c8q" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.048346 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw8hp\" (UniqueName: \"kubernetes.io/projected/f9bc2e28-21b6-42e1-a680-92426ae37ecf-kube-api-access-gw8hp\") pod \"heat-db-sync-wrwhd\" (UID: \"f9bc2e28-21b6-42e1-a680-92426ae37ecf\") " pod="openstack/heat-db-sync-wrwhd" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.048379 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42ae7c15-9f4d-4ef8-83d7-279226e74846-etc-machine-id\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.054232 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9bc2e28-21b6-42e1-a680-92426ae37ecf-combined-ca-bundle\") pod \"heat-db-sync-wrwhd\" (UID: \"f9bc2e28-21b6-42e1-a680-92426ae37ecf\") " pod="openstack/heat-db-sync-wrwhd" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.057089 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9bc2e28-21b6-42e1-a680-92426ae37ecf-config-data\") pod \"heat-db-sync-wrwhd\" (UID: \"f9bc2e28-21b6-42e1-a680-92426ae37ecf\") " pod="openstack/heat-db-sync-wrwhd" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.096968 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw8hp\" (UniqueName: \"kubernetes.io/projected/f9bc2e28-21b6-42e1-a680-92426ae37ecf-kube-api-access-gw8hp\") pod \"heat-db-sync-wrwhd\" (UID: \"f9bc2e28-21b6-42e1-a680-92426ae37ecf\") " pod="openstack/heat-db-sync-wrwhd" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.107159 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-wrwhd" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.128683 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-z279d"] Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.152956 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv74m\" (UniqueName: \"kubernetes.io/projected/42ae7c15-9f4d-4ef8-83d7-279226e74846-kube-api-access-vv74m\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.153000 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-combined-ca-bundle\") pod \"neutron-db-sync-55c8q\" (UID: \"a29594ee-7274-4690-a79d-ef6bd3a8a2fd\") " pod="openstack/neutron-db-sync-55c8q" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.153065 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-config-data\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.153090 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f6afc80-5a96-44ee-98c0-89a474913867-db-sync-config-data\") pod \"barbican-db-sync-pxq6t\" (UID: \"1f6afc80-5a96-44ee-98c0-89a474913867\") " pod="openstack/barbican-db-sync-pxq6t" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.153125 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6afc80-5a96-44ee-98c0-89a474913867-combined-ca-bundle\") pod \"barbican-db-sync-pxq6t\" (UID: \"1f6afc80-5a96-44ee-98c0-89a474913867\") " pod="openstack/barbican-db-sync-pxq6t" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.153162 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk66h\" (UniqueName: \"kubernetes.io/projected/1f6afc80-5a96-44ee-98c0-89a474913867-kube-api-access-bk66h\") pod \"barbican-db-sync-pxq6t\" (UID: \"1f6afc80-5a96-44ee-98c0-89a474913867\") " pod="openstack/barbican-db-sync-pxq6t" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.153194 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-config\") pod \"neutron-db-sync-55c8q\" (UID: \"a29594ee-7274-4690-a79d-ef6bd3a8a2fd\") " pod="openstack/neutron-db-sync-55c8q" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.153245 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42ae7c15-9f4d-4ef8-83d7-279226e74846-etc-machine-id\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.153341 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-scripts\") pod 
\"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.153396 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-db-sync-config-data\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.153419 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-combined-ca-bundle\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.153455 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrv2s\" (UniqueName: \"kubernetes.io/projected/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-kube-api-access-jrv2s\") pod \"neutron-db-sync-55c8q\" (UID: \"a29594ee-7274-4690-a79d-ef6bd3a8a2fd\") " pod="openstack/neutron-db-sync-55c8q" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.160026 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42ae7c15-9f4d-4ef8-83d7-279226e74846-etc-machine-id\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.169099 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-combined-ca-bundle\") pod \"neutron-db-sync-55c8q\" (UID: \"a29594ee-7274-4690-a79d-ef6bd3a8a2fd\") " pod="openstack/neutron-db-sync-55c8q" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.169828 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-config\") pod \"neutron-db-sync-55c8q\" (UID: \"a29594ee-7274-4690-a79d-ef6bd3a8a2fd\") " pod="openstack/neutron-db-sync-55c8q" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.170147 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-combined-ca-bundle\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.174557 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-config-data\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.176982 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv74m\" (UniqueName: \"kubernetes.io/projected/42ae7c15-9f4d-4ef8-83d7-279226e74846-kube-api-access-vv74m\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.179857 4693 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-scripts\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.190140 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-db-sync-config-data\") pod \"cinder-db-sync-6fb94\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.191238 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6fb94" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.191677 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-26nbh"] Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.193025 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-26nbh" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.196516 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrv2s\" (UniqueName: \"kubernetes.io/projected/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-kube-api-access-jrv2s\") pod \"neutron-db-sync-55c8q\" (UID: \"a29594ee-7274-4690-a79d-ef6bd3a8a2fd\") " pod="openstack/neutron-db-sync-55c8q" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.201729 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.201998 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-tqrpr" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.202214 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.217220 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-chngd"] Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.230989 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.259644 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f6afc80-5a96-44ee-98c0-89a474913867-db-sync-config-data\") pod \"barbican-db-sync-pxq6t\" (UID: \"1f6afc80-5a96-44ee-98c0-89a474913867\") " pod="openstack/barbican-db-sync-pxq6t" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.259709 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6afc80-5a96-44ee-98c0-89a474913867-combined-ca-bundle\") pod \"barbican-db-sync-pxq6t\" (UID: \"1f6afc80-5a96-44ee-98c0-89a474913867\") " pod="openstack/barbican-db-sync-pxq6t" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.259779 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk66h\" (UniqueName: \"kubernetes.io/projected/1f6afc80-5a96-44ee-98c0-89a474913867-kube-api-access-bk66h\") pod \"barbican-db-sync-pxq6t\" (UID: \"1f6afc80-5a96-44ee-98c0-89a474913867\") " pod="openstack/barbican-db-sync-pxq6t" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.264937 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f6afc80-5a96-44ee-98c0-89a474913867-db-sync-config-data\") pod \"barbican-db-sync-pxq6t\" (UID: \"1f6afc80-5a96-44ee-98c0-89a474913867\") " pod="openstack/barbican-db-sync-pxq6t" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.300017 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6afc80-5a96-44ee-98c0-89a474913867-combined-ca-bundle\") pod \"barbican-db-sync-pxq6t\" (UID: \"1f6afc80-5a96-44ee-98c0-89a474913867\") " pod="openstack/barbican-db-sync-pxq6t" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.300621 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk66h\" (UniqueName: \"kubernetes.io/projected/1f6afc80-5a96-44ee-98c0-89a474913867-kube-api-access-bk66h\") pod \"barbican-db-sync-pxq6t\" (UID: \"1f6afc80-5a96-44ee-98c0-89a474913867\") " pod="openstack/barbican-db-sync-pxq6t" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.301390 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-26nbh"] Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.351308 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-chngd"] Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.364464 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-combined-ca-bundle\") pod \"placement-db-sync-26nbh\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") " pod="openstack/placement-db-sync-26nbh" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.364504 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 
16:12:06.364526 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-config-data\") pod \"placement-db-sync-26nbh\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") " pod="openstack/placement-db-sync-26nbh" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.364563 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-config\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.364612 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.364629 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.364669 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-scripts\") pod \"placement-db-sync-26nbh\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") " pod="openstack/placement-db-sync-26nbh" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.364693 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnwcm\" (UniqueName: \"kubernetes.io/projected/9560987d-9cb8-4361-9c44-3be630e46634-kube-api-access-rnwcm\") pod \"placement-db-sync-26nbh\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") " pod="openstack/placement-db-sync-26nbh" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.364781 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.364801 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9560987d-9cb8-4361-9c44-3be630e46634-logs\") pod \"placement-db-sync-26nbh\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") " pod="openstack/placement-db-sync-26nbh" Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.364829 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm5lq\" (UniqueName: \"kubernetes.io/projected/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-kube-api-access-wm5lq\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd" Dec 12 16:12:06 
crc kubenswrapper[4693]: I1212 16:12:06.384256 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.388308 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.390964 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.391430 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.391667 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.463188 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-55c8q"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.481757 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnwcm\" (UniqueName: \"kubernetes.io/projected/9560987d-9cb8-4361-9c44-3be630e46634-kube-api-access-rnwcm\") pod \"placement-db-sync-26nbh\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") " pod="openstack/placement-db-sync-26nbh"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.481971 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.482002 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9560987d-9cb8-4361-9c44-3be630e46634-logs\") pod \"placement-db-sync-26nbh\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") " pod="openstack/placement-db-sync-26nbh"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.482062 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm5lq\" (UniqueName: \"kubernetes.io/projected/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-kube-api-access-wm5lq\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.482186 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-combined-ca-bundle\") pod \"placement-db-sync-26nbh\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") " pod="openstack/placement-db-sync-26nbh"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.483147 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.483200 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-config-data\") pod \"placement-db-sync-26nbh\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") " pod="openstack/placement-db-sync-26nbh"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.483313 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-config\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.484894 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.484937 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.484996 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-scripts\") pod \"placement-db-sync-26nbh\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") " pod="openstack/placement-db-sync-26nbh"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.488009 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.489719 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.490026 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9560987d-9cb8-4361-9c44-3be630e46634-logs\") pod \"placement-db-sync-26nbh\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") " pod="openstack/placement-db-sync-26nbh"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.496624 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-config\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.500562 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.501108 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.501709 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-combined-ca-bundle\") pod \"placement-db-sync-26nbh\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") " pod="openstack/placement-db-sync-26nbh"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.519319 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm5lq\" (UniqueName: \"kubernetes.io/projected/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-kube-api-access-wm5lq\") pod \"dnsmasq-dns-58dd9ff6bc-chngd\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.519713 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-scripts\") pod \"placement-db-sync-26nbh\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") " pod="openstack/placement-db-sync-26nbh"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.524990 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-config-data\") pod \"placement-db-sync-26nbh\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") " pod="openstack/placement-db-sync-26nbh"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.528573 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnwcm\" (UniqueName: \"kubernetes.io/projected/9560987d-9cb8-4361-9c44-3be630e46634-kube-api-access-rnwcm\") pod \"placement-db-sync-26nbh\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") " pod="openstack/placement-db-sync-26nbh"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.534006 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pxq6t"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.578852 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-26nbh"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.588587 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.588687 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn7tp\" (UniqueName: \"kubernetes.io/projected/fe4e1190-a9f2-4010-98d3-a41898274b56-kube-api-access-gn7tp\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.588710 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-scripts\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.588737 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.588806 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e1190-a9f2-4010-98d3-a41898274b56-log-httpd\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.588853 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-config-data\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.588906 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e1190-a9f2-4010-98d3-a41898274b56-run-httpd\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.605894 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.663161 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-z279d"]
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.691173 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-config-data\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.691246 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e1190-a9f2-4010-98d3-a41898274b56-run-httpd\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.691311 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.691328 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn7tp\" (UniqueName: \"kubernetes.io/projected/fe4e1190-a9f2-4010-98d3-a41898274b56-kube-api-access-gn7tp\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.691438 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-scripts\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.691466 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.691535 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e1190-a9f2-4010-98d3-a41898274b56-log-httpd\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.691919 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e1190-a9f2-4010-98d3-a41898274b56-log-httpd\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.694565 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e1190-a9f2-4010-98d3-a41898274b56-run-httpd\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.698411 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-config-data\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.711531 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.715858 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn7tp\" (UniqueName: \"kubernetes.io/projected/fe4e1190-a9f2-4010-98d3-a41898274b56-kube-api-access-gn7tp\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.772411 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-scripts\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.775070 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.786428 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 12 16:12:06 crc kubenswrapper[4693]: I1212 16:12:06.988471 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4bwxm"]
Dec 12 16:12:07 crc kubenswrapper[4693]: I1212 16:12:07.256671 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6fb94"]
Dec 12 16:12:07 crc kubenswrapper[4693]: I1212 16:12:07.405510 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4bwxm" event={"ID":"ae72b152-13a7-4392-a111-2a2d1db03a9e","Type":"ContainerStarted","Data":"82943592f68ec84059ddd8b82d45d20c94d7399e765736c70a4e6ec7f3aa7fa0"}
Dec 12 16:12:07 crc kubenswrapper[4693]: I1212 16:12:07.405897 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4bwxm" event={"ID":"ae72b152-13a7-4392-a111-2a2d1db03a9e","Type":"ContainerStarted","Data":"5850bfa12042fb400fcadad5e93fa6cb4f3139fa52ddd9164bea21325d093726"}
Dec 12 16:12:07 crc kubenswrapper[4693]: I1212 16:12:07.412391 4693 generic.go:334] "Generic (PLEG): container finished" podID="4a38ed63-d27f-43cd-aa41-a6804bf83904" containerID="ca44bfce4074650c2f8742db2cfcedd02357a1de8a16b9676953d2342b5ad461" exitCode=0
Dec 12 16:12:07 crc kubenswrapper[4693]: I1212 16:12:07.412578 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-z279d" event={"ID":"4a38ed63-d27f-43cd-aa41-a6804bf83904","Type":"ContainerDied","Data":"ca44bfce4074650c2f8742db2cfcedd02357a1de8a16b9676953d2342b5ad461"}
Dec 12 16:12:07 crc kubenswrapper[4693]: I1212 16:12:07.412649 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-z279d" event={"ID":"4a38ed63-d27f-43cd-aa41-a6804bf83904","Type":"ContainerStarted","Data":"c338c71244d2c421a2a17387348fb969a31b9e28d1d6fcae098c7644364ea949"}
Dec 12 16:12:07 crc kubenswrapper[4693]: I1212 16:12:07.417682 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6fb94" event={"ID":"42ae7c15-9f4d-4ef8-83d7-279226e74846","Type":"ContainerStarted","Data":"4c6fce7ebc56547dd970db95a8621da834cd714adaa40b66d480684634fef58d"}
Dec 12 16:12:07 crc kubenswrapper[4693]: W1212 16:12:07.441522 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda29594ee_7274_4690_a79d_ef6bd3a8a2fd.slice/crio-076f17cf3b02b7b44ca389611a3d5b64e2af7fb5329c6b547840f004fc784e72 WatchSource:0}: Error finding container 076f17cf3b02b7b44ca389611a3d5b64e2af7fb5329c6b547840f004fc784e72: Status 404 returned error can't find the container with id 076f17cf3b02b7b44ca389611a3d5b64e2af7fb5329c6b547840f004fc784e72
Dec 12 16:12:07 crc kubenswrapper[4693]: I1212 16:12:07.487362 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-wrwhd"]
Dec 12 16:12:07 crc kubenswrapper[4693]: I1212 16:12:07.545013 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-55c8q"]
Dec 12 16:12:07 crc kubenswrapper[4693]: I1212 16:12:07.582544 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4bwxm" podStartSLOduration=2.582519918 podStartE2EDuration="2.582519918s" podCreationTimestamp="2025-12-12 16:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:12:07.45879764 +0000 UTC m=+1554.627437241" watchObservedRunningTime="2025-12-12 16:12:07.582519918 +0000 UTC m=+1554.751159519"
Dec 12 16:12:07 crc kubenswrapper[4693]: I1212 16:12:07.696226 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pxq6t"]
Dec 12 16:12:07 crc kubenswrapper[4693]: I1212 16:12:07.729714 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-26nbh"]
Dec 12 16:12:07 crc kubenswrapper[4693]: I1212 16:12:07.921868 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-chngd"]
Dec 12 16:12:07 crc kubenswrapper[4693]: I1212 16:12:07.941776 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.308626 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-z279d"
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.441597 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-26nbh" event={"ID":"9560987d-9cb8-4361-9c44-3be630e46634","Type":"ContainerStarted","Data":"f5f9f5bd4dcbff5460fd86db0b1aac2dd978203c834c945c174b47373d1ac892"}
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.447145 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pxq6t" event={"ID":"1f6afc80-5a96-44ee-98c0-89a474913867","Type":"ContainerStarted","Data":"77874625108a10fb1fc054ee4e671e40804e222a6821967e3b09c69e9e296e17"}
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.452613 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-55c8q" event={"ID":"a29594ee-7274-4690-a79d-ef6bd3a8a2fd","Type":"ContainerStarted","Data":"dbd2364c35e67e76146c6f002481a8873206dc1d6bada39b3c602172035c8c65"}
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.452652 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-55c8q" event={"ID":"a29594ee-7274-4690-a79d-ef6bd3a8a2fd","Type":"ContainerStarted","Data":"076f17cf3b02b7b44ca389611a3d5b64e2af7fb5329c6b547840f004fc784e72"}
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.472476 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-z279d" event={"ID":"4a38ed63-d27f-43cd-aa41-a6804bf83904","Type":"ContainerDied","Data":"c338c71244d2c421a2a17387348fb969a31b9e28d1d6fcae098c7644364ea949"}
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.472529 4693 scope.go:117] "RemoveContainer" containerID="ca44bfce4074650c2f8742db2cfcedd02357a1de8a16b9676953d2342b5ad461"
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.472671 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-z279d"
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.475946 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-dns-swift-storage-0\") pod \"4a38ed63-d27f-43cd-aa41-a6804bf83904\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") "
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.476036 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-ovsdbserver-nb\") pod \"4a38ed63-d27f-43cd-aa41-a6804bf83904\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") "
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.476089 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-ovsdbserver-sb\") pod \"4a38ed63-d27f-43cd-aa41-a6804bf83904\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") "
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.476148 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-config\") pod \"4a38ed63-d27f-43cd-aa41-a6804bf83904\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") "
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.476192 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-dns-svc\") pod \"4a38ed63-d27f-43cd-aa41-a6804bf83904\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") "
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.476231 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqgrf\" (UniqueName: \"kubernetes.io/projected/4a38ed63-d27f-43cd-aa41-a6804bf83904-kube-api-access-vqgrf\") pod \"4a38ed63-d27f-43cd-aa41-a6804bf83904\" (UID: \"4a38ed63-d27f-43cd-aa41-a6804bf83904\") "
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.481778 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-wrwhd" event={"ID":"f9bc2e28-21b6-42e1-a680-92426ae37ecf","Type":"ContainerStarted","Data":"54118d49fe238afddfd78f4c7c96f85b9da701a5911374f1687612cd4b4e521a"}
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.505553 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd" event={"ID":"3d6ab82c-1fed-49ec-b76a-ba30dd458f26","Type":"ContainerStarted","Data":"e7a75ae185cca5c1c0b4696f0b7795203edc8b43613ec84c75b70adf0617ba62"}
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.512670 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-55c8q" podStartSLOduration=3.512647898 podStartE2EDuration="3.512647898s" podCreationTimestamp="2025-12-12 16:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:12:08.480062292 +0000 UTC m=+1555.648701893" watchObservedRunningTime="2025-12-12 16:12:08.512647898 +0000 UTC m=+1555.681287499"
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.522962 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e1190-a9f2-4010-98d3-a41898274b56","Type":"ContainerStarted","Data":"8473d01318b7d92b6e7b6cc05ca9f33540a7e1794d24fa153f961f0ab3020104"}
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.527471 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a38ed63-d27f-43cd-aa41-a6804bf83904-kube-api-access-vqgrf" (OuterVolumeSpecName: "kube-api-access-vqgrf") pod "4a38ed63-d27f-43cd-aa41-a6804bf83904" (UID: "4a38ed63-d27f-43cd-aa41-a6804bf83904"). InnerVolumeSpecName "kube-api-access-vqgrf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.580689 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqgrf\" (UniqueName: \"kubernetes.io/projected/4a38ed63-d27f-43cd-aa41-a6804bf83904-kube-api-access-vqgrf\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.588876 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-config" (OuterVolumeSpecName: "config") pod "4a38ed63-d27f-43cd-aa41-a6804bf83904" (UID: "4a38ed63-d27f-43cd-aa41-a6804bf83904"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.594369 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4a38ed63-d27f-43cd-aa41-a6804bf83904" (UID: "4a38ed63-d27f-43cd-aa41-a6804bf83904"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.605914 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.606822 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4a38ed63-d27f-43cd-aa41-a6804bf83904" (UID: "4a38ed63-d27f-43cd-aa41-a6804bf83904"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.627291 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4a38ed63-d27f-43cd-aa41-a6804bf83904" (UID: "4a38ed63-d27f-43cd-aa41-a6804bf83904"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.633070 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a38ed63-d27f-43cd-aa41-a6804bf83904" (UID: "4a38ed63-d27f-43cd-aa41-a6804bf83904"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.682483 4693 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.682515 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.682526 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.682535 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-config\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.682547 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a38ed63-d27f-43cd-aa41-a6804bf83904-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.878077 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-z279d"]
Dec 12 16:12:08 crc kubenswrapper[4693]: I1212 16:12:08.908532 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-z279d"]
Dec 12 16:12:09 crc kubenswrapper[4693]: I1212 16:12:09.380410 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a38ed63-d27f-43cd-aa41-a6804bf83904" path="/var/lib/kubelet/pods/4a38ed63-d27f-43cd-aa41-a6804bf83904/volumes"
Dec 12 16:12:09 crc kubenswrapper[4693]: I1212 16:12:09.603166 4693 generic.go:334] "Generic (PLEG): container finished" podID="3d6ab82c-1fed-49ec-b76a-ba30dd458f26" containerID="daa05e1fc09891c0fc8aef47f3c06bf8eb018ddd5f4560eed7b024f7c7bfac88" exitCode=0
Dec 12 16:12:09 crc kubenswrapper[4693]: I1212 16:12:09.603317 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd" event={"ID":"3d6ab82c-1fed-49ec-b76a-ba30dd458f26","Type":"ContainerDied","Data":"daa05e1fc09891c0fc8aef47f3c06bf8eb018ddd5f4560eed7b024f7c7bfac88"}
Dec 12 16:12:10 crc kubenswrapper[4693]: I1212 16:12:10.645499 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd" event={"ID":"3d6ab82c-1fed-49ec-b76a-ba30dd458f26","Type":"ContainerStarted","Data":"38760689a66bed25a0cffc2c3c10fac796a5863301b3c8867251d48b3ebec7c8"}
Dec 12 16:12:10 crc kubenswrapper[4693]: I1212 16:12:10.646050 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd"
Dec 12 16:12:10 crc kubenswrapper[4693]: I1212 16:12:10.673197 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd" podStartSLOduration=4.673177315 podStartE2EDuration="4.673177315s" podCreationTimestamp="2025-12-12 16:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:12:10.664941434 +0000 UTC m=+1557.833581035" watchObservedRunningTime="2025-12-12 16:12:10.673177315 +0000 UTC m=+1557.841816916"
Dec 12 16:12:12 crc kubenswrapper[4693]: I1212 16:12:12.534055 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:12:12 crc kubenswrapper[4693]: I1212 16:12:12.534570 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:12:13 crc kubenswrapper[4693]: I1212 16:12:13.691844 4693 generic.go:334] "Generic (PLEG): container finished" podID="56fb0f10-fbce-4aed-9a10-7128021ce48f" containerID="28fd08091e76687fb5188f41be0dea9c01a9ba08ff14b73e9d3358907e805303" exitCode=0
Dec 12 16:12:13 crc kubenswrapper[4693]: I1212 16:12:13.691990 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r9p8t" event={"ID":"56fb0f10-fbce-4aed-9a10-7128021ce48f","Type":"ContainerDied","Data":"28fd08091e76687fb5188f41be0dea9c01a9ba08ff14b73e9d3358907e805303"}
Dec 12 16:12:14 crc kubenswrapper[4693]: I1212 16:12:14.706799 4693 generic.go:334] "Generic (PLEG): container finished" podID="ae72b152-13a7-4392-a111-2a2d1db03a9e" containerID="82943592f68ec84059ddd8b82d45d20c94d7399e765736c70a4e6ec7f3aa7fa0" exitCode=0
Dec 12 16:12:14 crc kubenswrapper[4693]: I1212 16:12:14.708512 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4bwxm" event={"ID":"ae72b152-13a7-4392-a111-2a2d1db03a9e","Type":"ContainerDied","Data":"82943592f68ec84059ddd8b82d45d20c94d7399e765736c70a4e6ec7f3aa7fa0"}
Dec 12 16:12:15 crc kubenswrapper[4693]: I1212 16:12:15.437979 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7ht5v"]
Dec 12 16:12:15 crc kubenswrapper[4693]: E1212 16:12:15.439379 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a38ed63-d27f-43cd-aa41-a6804bf83904" containerName="init"
Dec 12 16:12:15 crc kubenswrapper[4693]: I1212 16:12:15.439407 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a38ed63-d27f-43cd-aa41-a6804bf83904" containerName="init"
Dec 12 16:12:15 crc kubenswrapper[4693]: I1212 16:12:15.439780 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a38ed63-d27f-43cd-aa41-a6804bf83904" containerName="init"
Dec 12 16:12:15 crc kubenswrapper[4693]: I1212 16:12:15.442191 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ht5v"
Dec 12 16:12:15 crc kubenswrapper[4693]: I1212 16:12:15.459675 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7ht5v"]
Dec 12 16:12:15 crc kubenswrapper[4693]: I1212 16:12:15.523556 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f907e8ff-aa6a-44c2-a4ca-d73203442782-utilities\") pod \"community-operators-7ht5v\" (UID: \"f907e8ff-aa6a-44c2-a4ca-d73203442782\") " pod="openshift-marketplace/community-operators-7ht5v"
Dec 12 16:12:15 crc kubenswrapper[4693]: I1212 16:12:15.523615 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hftc\" (UniqueName: \"kubernetes.io/projected/f907e8ff-aa6a-44c2-a4ca-d73203442782-kube-api-access-2hftc\") pod \"community-operators-7ht5v\" (UID: \"f907e8ff-aa6a-44c2-a4ca-d73203442782\") " pod="openshift-marketplace/community-operators-7ht5v"
Dec 12 16:12:15 crc kubenswrapper[4693]: I1212 16:12:15.523961 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f907e8ff-aa6a-44c2-a4ca-d73203442782-catalog-content\") pod \"community-operators-7ht5v\" (UID: \"f907e8ff-aa6a-44c2-a4ca-d73203442782\") " pod="openshift-marketplace/community-operators-7ht5v"
Dec 12 16:12:15 crc kubenswrapper[4693]: I1212 16:12:15.625547 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f907e8ff-aa6a-44c2-a4ca-d73203442782-utilities\") pod \"community-operators-7ht5v\" (UID: \"f907e8ff-aa6a-44c2-a4ca-d73203442782\") " pod="openshift-marketplace/community-operators-7ht5v"
Dec 12 16:12:15 crc kubenswrapper[4693]: I1212 16:12:15.625881 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hftc\" (UniqueName: \"kubernetes.io/projected/f907e8ff-aa6a-44c2-a4ca-d73203442782-kube-api-access-2hftc\") pod \"community-operators-7ht5v\" (UID: \"f907e8ff-aa6a-44c2-a4ca-d73203442782\") " pod="openshift-marketplace/community-operators-7ht5v"
Dec 12 16:12:15 crc kubenswrapper[4693]: I1212 16:12:15.625990 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f907e8ff-aa6a-44c2-a4ca-d73203442782-catalog-content\") pod \"community-operators-7ht5v\" (UID: \"f907e8ff-aa6a-44c2-a4ca-d73203442782\") " pod="openshift-marketplace/community-operators-7ht5v"
Dec 12 16:12:15 crc kubenswrapper[4693]: I1212 16:12:15.626480 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f907e8ff-aa6a-44c2-a4ca-d73203442782-catalog-content\") pod \"community-operators-7ht5v\" (UID: \"f907e8ff-aa6a-44c2-a4ca-d73203442782\") " pod="openshift-marketplace/community-operators-7ht5v"
Dec 12 16:12:15 crc kubenswrapper[4693]: I1212 16:12:15.626700 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f907e8ff-aa6a-44c2-a4ca-d73203442782-utilities\") pod \"community-operators-7ht5v\" (UID: \"f907e8ff-aa6a-44c2-a4ca-d73203442782\") " pod="openshift-marketplace/community-operators-7ht5v"
Dec 12 16:12:15 crc kubenswrapper[4693]: I1212 16:12:15.678112 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hftc\" (UniqueName: \"kubernetes.io/projected/f907e8ff-aa6a-44c2-a4ca-d73203442782-kube-api-access-2hftc\") pod \"community-operators-7ht5v\" (UID: \"f907e8ff-aa6a-44c2-a4ca-d73203442782\") " pod="openshift-marketplace/community-operators-7ht5v"
Dec 12 16:12:15 crc kubenswrapper[4693]: I1212 16:12:15.770144 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ht5v"
Dec 12 16:12:16 crc kubenswrapper[4693]: I1212 16:12:16.608519 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd"
Dec 12 16:12:16 crc kubenswrapper[4693]: I1212 16:12:16.718592 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-drvh6"]
Dec 12 16:12:16 crc kubenswrapper[4693]: I1212 16:12:16.736816 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" podUID="0545ccf6-794f-4a89-912a-0e07df8534f6" containerName="dnsmasq-dns" containerID="cri-o://c5fbdf994d0ccfa3303aad6ff34a4ab6fc3442a4c6d81b9c39abd9ee873b0af4" gracePeriod=10
Dec 12 16:12:17 crc kubenswrapper[4693]: I1212 16:12:17.449727 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" podUID="0545ccf6-794f-4a89-912a-0e07df8534f6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: connect: connection refused"
Dec 12 16:12:17 crc kubenswrapper[4693]: I1212 16:12:17.773730 4693 generic.go:334] "Generic (PLEG): container finished" podID="0545ccf6-794f-4a89-912a-0e07df8534f6" containerID="c5fbdf994d0ccfa3303aad6ff34a4ab6fc3442a4c6d81b9c39abd9ee873b0af4" exitCode=0
Dec 12 16:12:17 crc kubenswrapper[4693]: I1212 16:12:17.773791 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" event={"ID":"0545ccf6-794f-4a89-912a-0e07df8534f6","Type":"ContainerDied","Data":"c5fbdf994d0ccfa3303aad6ff34a4ab6fc3442a4c6d81b9c39abd9ee873b0af4"}
Dec 12 16:12:19 crc kubenswrapper[4693]: I1212 16:12:19.393226 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r9p8t"
Dec 12 16:12:19 crc kubenswrapper[4693]: I1212 16:12:19.544051 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-db-sync-config-data\") pod \"56fb0f10-fbce-4aed-9a10-7128021ce48f\" (UID: \"56fb0f10-fbce-4aed-9a10-7128021ce48f\") "
Dec 12 16:12:19 crc kubenswrapper[4693]: I1212 16:12:19.544140 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-config-data\") pod \"56fb0f10-fbce-4aed-9a10-7128021ce48f\" (UID: \"56fb0f10-fbce-4aed-9a10-7128021ce48f\") "
Dec 12 16:12:19 crc kubenswrapper[4693]: I1212 16:12:19.544265 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptpvk\" (UniqueName: \"kubernetes.io/projected/56fb0f10-fbce-4aed-9a10-7128021ce48f-kube-api-access-ptpvk\") pod \"56fb0f10-fbce-4aed-9a10-7128021ce48f\" (UID: \"56fb0f10-fbce-4aed-9a10-7128021ce48f\") "
Dec 12 16:12:19 crc kubenswrapper[4693]: I1212 16:12:19.544370 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-combined-ca-bundle\") pod \"56fb0f10-fbce-4aed-9a10-7128021ce48f\" (UID: \"56fb0f10-fbce-4aed-9a10-7128021ce48f\") "
Dec 12 16:12:19 crc kubenswrapper[4693]: I1212 16:12:19.559838 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "56fb0f10-fbce-4aed-9a10-7128021ce48f" (UID: "56fb0f10-fbce-4aed-9a10-7128021ce48f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:12:19 crc kubenswrapper[4693]: I1212 16:12:19.560037 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56fb0f10-fbce-4aed-9a10-7128021ce48f-kube-api-access-ptpvk" (OuterVolumeSpecName: "kube-api-access-ptpvk") pod "56fb0f10-fbce-4aed-9a10-7128021ce48f" (UID: "56fb0f10-fbce-4aed-9a10-7128021ce48f"). InnerVolumeSpecName "kube-api-access-ptpvk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:12:19 crc kubenswrapper[4693]: I1212 16:12:19.583261 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56fb0f10-fbce-4aed-9a10-7128021ce48f" (UID: "56fb0f10-fbce-4aed-9a10-7128021ce48f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:12:19 crc kubenswrapper[4693]: I1212 16:12:19.647839 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:19 crc kubenswrapper[4693]: I1212 16:12:19.647867 4693 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:19 crc kubenswrapper[4693]: I1212 16:12:19.647901 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptpvk\" (UniqueName: \"kubernetes.io/projected/56fb0f10-fbce-4aed-9a10-7128021ce48f-kube-api-access-ptpvk\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:19 crc kubenswrapper[4693]: I1212 16:12:19.658006 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-config-data" (OuterVolumeSpecName: "config-data") pod "56fb0f10-fbce-4aed-9a10-7128021ce48f" (UID: "56fb0f10-fbce-4aed-9a10-7128021ce48f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:12:19 crc kubenswrapper[4693]: I1212 16:12:19.752819 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56fb0f10-fbce-4aed-9a10-7128021ce48f-config-data\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:19 crc kubenswrapper[4693]: I1212 16:12:19.810069 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r9p8t" event={"ID":"56fb0f10-fbce-4aed-9a10-7128021ce48f","Type":"ContainerDied","Data":"a66336a94509a7ea8c4cf6b20a3a9396b4da458f90da3fa68e975e4781d37e0e"}
Dec 12 16:12:19 crc kubenswrapper[4693]: I1212 16:12:19.810387 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a66336a94509a7ea8c4cf6b20a3a9396b4da458f90da3fa68e975e4781d37e0e"
Dec 12 16:12:19 crc kubenswrapper[4693]: I1212 16:12:19.810477 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r9p8t"
Dec 12 16:12:20 crc kubenswrapper[4693]: I1212 16:12:20.832726 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-h6vgx"]
Dec 12 16:12:20 crc kubenswrapper[4693]: E1212 16:12:20.833452 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56fb0f10-fbce-4aed-9a10-7128021ce48f" containerName="glance-db-sync"
Dec 12 16:12:20 crc kubenswrapper[4693]: I1212 16:12:20.833465 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="56fb0f10-fbce-4aed-9a10-7128021ce48f" containerName="glance-db-sync"
Dec 12 16:12:20 crc kubenswrapper[4693]: I1212 16:12:20.833686 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="56fb0f10-fbce-4aed-9a10-7128021ce48f" containerName="glance-db-sync"
Dec 12 16:12:20 crc kubenswrapper[4693]: I1212 16:12:20.838840 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:20 crc kubenswrapper[4693]: I1212 16:12:20.898489 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-h6vgx"]
Dec 12 16:12:20 crc kubenswrapper[4693]: I1212 16:12:20.982164 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:20 crc kubenswrapper[4693]: I1212 16:12:20.982449 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-config\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:20 crc kubenswrapper[4693]: I1212 16:12:20.982509 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:20 crc kubenswrapper[4693]: I1212 16:12:20.982607 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:20 crc kubenswrapper[4693]: I1212 16:12:20.982677 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:20 crc kubenswrapper[4693]: I1212 16:12:20.982702 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh29t\" (UniqueName: \"kubernetes.io/projected/701784f3-b9b0-4f46-82e5-c07ba162de29-kube-api-access-qh29t\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.085234 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.085418 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-config\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.085454 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.085501 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.085567 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.085601 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh29t\" (UniqueName: \"kubernetes.io/projected/701784f3-b9b0-4f46-82e5-c07ba162de29-kube-api-access-qh29t\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.086700 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.086799 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.087005 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-config\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.087072 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.087501 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.125102 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh29t\" (UniqueName: \"kubernetes.io/projected/701784f3-b9b0-4f46-82e5-c07ba162de29-kube-api-access-qh29t\") pod \"dnsmasq-dns-785d8bcb8c-h6vgx\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.189534 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.777427 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.779622 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.783340 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.783913 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2hqqj"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.784295 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.794390 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.910490 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cc41f5a-262d-4513-b064-c12dfea1625b-logs\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.910564 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-scripts\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.910620 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czvhl\" (UniqueName: \"kubernetes.io/projected/7cc41f5a-262d-4513-b064-c12dfea1625b-kube-api-access-czvhl\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.910737 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.910942 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cc41f5a-262d-4513-b064-c12dfea1625b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.911045 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:21 crc kubenswrapper[4693]: I1212 16:12:21.911075 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-config-data\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.013925 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cc41f5a-262d-4513-b064-c12dfea1625b-logs\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.013983 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-scripts\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.014036 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czvhl\" (UniqueName: \"kubernetes.io/projected/7cc41f5a-262d-4513-b064-c12dfea1625b-kube-api-access-czvhl\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.014130 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.014192 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cc41f5a-262d-4513-b064-c12dfea1625b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.014256 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.014307 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-config-data\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.015831 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cc41f5a-262d-4513-b064-c12dfea1625b-logs\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.017255 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cc41f5a-262d-4513-b064-c12dfea1625b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.017588 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.017614 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/45099b66416d1862b963c1d5cb75f06870617309ef888c0d9e553da7cdf42994/globalmount\"" pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.019751 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-config-data\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.035253 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.040288 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-scripts\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.043593 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czvhl\" (UniqueName: \"kubernetes.io/projected/7cc41f5a-262d-4513-b064-c12dfea1625b-kube-api-access-czvhl\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.084115 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") pod \"glance-default-external-api-0\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") " pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.125674 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.128064 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.130734 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.141184 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.146566 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.219282 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.219899 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.220050 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.220483 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c548dd-1e5e-485e-b823-0ec52cacadf5-logs\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.220748 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96c548dd-1e5e-485e-b823-0ec52cacadf5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.220861 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.220925 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6dbj\" (UniqueName: \"kubernetes.io/projected/96c548dd-1e5e-485e-b823-0ec52cacadf5-kube-api-access-s6dbj\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.324247 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.324481 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.324588 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.324720 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c548dd-1e5e-485e-b823-0ec52cacadf5-logs\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.324826 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96c548dd-1e5e-485e-b823-0ec52cacadf5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.324897 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.324942 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6dbj\" (UniqueName: \"kubernetes.io/projected/96c548dd-1e5e-485e-b823-0ec52cacadf5-kube-api-access-s6dbj\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.337104 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96c548dd-1e5e-485e-b823-0ec52cacadf5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.337626 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c548dd-1e5e-485e-b823-0ec52cacadf5-logs\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") "
pod="openstack/glance-default-internal-api-0" Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.341031 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.344252 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.345714 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.345768 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/50f66c90a996e74d8c468f9c70fbfefea786aa10bcb017475eedefeec7499784/globalmount\"" pod="openstack/glance-default-internal-api-0" Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.353559 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.397163 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6dbj\" (UniqueName: \"kubernetes.io/projected/96c548dd-1e5e-485e-b823-0ec52cacadf5-kube-api-access-s6dbj\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.452184 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") pod \"glance-default-internal-api-0\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:22 crc kubenswrapper[4693]: I1212 16:12:22.497025 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 12 16:12:24 crc kubenswrapper[4693]: I1212 16:12:24.030130 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 16:12:24 crc kubenswrapper[4693]: I1212 16:12:24.127560 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 16:12:27 crc kubenswrapper[4693]: I1212 16:12:27.450756 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" podUID="0545ccf6-794f-4a89-912a-0e07df8534f6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Dec 12 16:12:27 crc kubenswrapper[4693]: I1212 16:12:27.923501 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:27 crc kubenswrapper[4693]: I1212 16:12:27.938464 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4bwxm" event={"ID":"ae72b152-13a7-4392-a111-2a2d1db03a9e","Type":"ContainerDied","Data":"5850bfa12042fb400fcadad5e93fa6cb4f3139fa52ddd9164bea21325d093726"} Dec 12 16:12:27 crc kubenswrapper[4693]: I1212 16:12:27.938526 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5850bfa12042fb400fcadad5e93fa6cb4f3139fa52ddd9164bea21325d093726" Dec 12 16:12:27 crc kubenswrapper[4693]: I1212 16:12:27.938604 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4bwxm" Dec 12 16:12:27 crc kubenswrapper[4693]: I1212 16:12:27.985627 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-credential-keys\") pod \"ae72b152-13a7-4392-a111-2a2d1db03a9e\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " Dec 12 16:12:27 crc kubenswrapper[4693]: I1212 16:12:27.985714 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-config-data\") pod \"ae72b152-13a7-4392-a111-2a2d1db03a9e\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " Dec 12 16:12:27 crc kubenswrapper[4693]: I1212 16:12:27.985768 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-scripts\") pod \"ae72b152-13a7-4392-a111-2a2d1db03a9e\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " Dec 12 16:12:27 crc kubenswrapper[4693]: I1212 16:12:27.985872 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-combined-ca-bundle\") pod \"ae72b152-13a7-4392-a111-2a2d1db03a9e\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " Dec 12 16:12:27 crc kubenswrapper[4693]: I1212 16:12:27.985920 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rqn9\" (UniqueName: \"kubernetes.io/projected/ae72b152-13a7-4392-a111-2a2d1db03a9e-kube-api-access-2rqn9\") pod \"ae72b152-13a7-4392-a111-2a2d1db03a9e\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " Dec 12 16:12:27 crc kubenswrapper[4693]: I1212 16:12:27.986074 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-fernet-keys\") pod \"ae72b152-13a7-4392-a111-2a2d1db03a9e\" (UID: \"ae72b152-13a7-4392-a111-2a2d1db03a9e\") " Dec 12 16:12:27 crc kubenswrapper[4693]: I1212 16:12:27.995069 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ae72b152-13a7-4392-a111-2a2d1db03a9e" (UID: "ae72b152-13a7-4392-a111-2a2d1db03a9e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:12:27 crc kubenswrapper[4693]: I1212 16:12:27.995174 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ae72b152-13a7-4392-a111-2a2d1db03a9e" (UID: "ae72b152-13a7-4392-a111-2a2d1db03a9e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:12:28 crc kubenswrapper[4693]: I1212 16:12:27.997313 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-scripts" (OuterVolumeSpecName: "scripts") pod "ae72b152-13a7-4392-a111-2a2d1db03a9e" (UID: "ae72b152-13a7-4392-a111-2a2d1db03a9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:12:28 crc kubenswrapper[4693]: I1212 16:12:28.004468 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae72b152-13a7-4392-a111-2a2d1db03a9e-kube-api-access-2rqn9" (OuterVolumeSpecName: "kube-api-access-2rqn9") pod "ae72b152-13a7-4392-a111-2a2d1db03a9e" (UID: "ae72b152-13a7-4392-a111-2a2d1db03a9e"). InnerVolumeSpecName "kube-api-access-2rqn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:12:28 crc kubenswrapper[4693]: I1212 16:12:28.023825 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-config-data" (OuterVolumeSpecName: "config-data") pod "ae72b152-13a7-4392-a111-2a2d1db03a9e" (UID: "ae72b152-13a7-4392-a111-2a2d1db03a9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:12:28 crc kubenswrapper[4693]: I1212 16:12:28.024926 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae72b152-13a7-4392-a111-2a2d1db03a9e" (UID: "ae72b152-13a7-4392-a111-2a2d1db03a9e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:12:28 crc kubenswrapper[4693]: I1212 16:12:28.089180 4693 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:28 crc kubenswrapper[4693]: I1212 16:12:28.089204 4693 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:28 crc kubenswrapper[4693]: I1212 16:12:28.089213 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:28 crc kubenswrapper[4693]: I1212 16:12:28.089222 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:28 crc kubenswrapper[4693]: I1212 16:12:28.089231 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae72b152-13a7-4392-a111-2a2d1db03a9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:28 crc kubenswrapper[4693]: I1212 16:12:28.089241 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rqn9\" (UniqueName: \"kubernetes.io/projected/ae72b152-13a7-4392-a111-2a2d1db03a9e-kube-api-access-2rqn9\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.001942 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4bwxm"] Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.011386 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4bwxm"] Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.114416 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8d2xs"] Dec 12 16:12:29 crc kubenswrapper[4693]: E1212 16:12:29.114912 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae72b152-13a7-4392-a111-2a2d1db03a9e" containerName="keystone-bootstrap" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.114932 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae72b152-13a7-4392-a111-2a2d1db03a9e" containerName="keystone-bootstrap" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.115425 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae72b152-13a7-4392-a111-2a2d1db03a9e" containerName="keystone-bootstrap" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.116141 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.118494 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.121851 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.122152 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.122285 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.122847 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9xp5g" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.124804 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8d2xs"] Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.214087 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-combined-ca-bundle\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.214148 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-scripts\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.214185 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-config-data\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.214211 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-credential-keys\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.214232 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc9sc\" (UniqueName: \"kubernetes.io/projected/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-kube-api-access-gc9sc\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.214257 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-fernet-keys\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.316868 4693 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-config-data\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.317186 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-credential-keys\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.317329 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc9sc\" (UniqueName: \"kubernetes.io/projected/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-kube-api-access-gc9sc\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.317442 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-fernet-keys\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.317703 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-combined-ca-bundle\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.317833 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-scripts\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.327041 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-scripts\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.331879 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-fernet-keys\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.333829 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-config-data\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.337960 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-combined-ca-bundle\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") 
" pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.348830 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-credential-keys\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.359863 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc9sc\" (UniqueName: \"kubernetes.io/projected/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-kube-api-access-gc9sc\") pod \"keystone-bootstrap-8d2xs\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") " pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.434452 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae72b152-13a7-4392-a111-2a2d1db03a9e" path="/var/lib/kubelet/pods/ae72b152-13a7-4392-a111-2a2d1db03a9e/volumes" Dec 12 16:12:29 crc kubenswrapper[4693]: I1212 16:12:29.436476 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8d2xs" Dec 12 16:12:32 crc kubenswrapper[4693]: I1212 16:12:32.451782 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" podUID="0545ccf6-794f-4a89-912a-0e07df8534f6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Dec 12 16:12:32 crc kubenswrapper[4693]: I1212 16:12:32.452635 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:12:37 crc kubenswrapper[4693]: I1212 16:12:37.452994 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" podUID="0545ccf6-794f-4a89-912a-0e07df8534f6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Dec 12 16:12:39 crc kubenswrapper[4693]: I1212 16:12:39.045739 4693 generic.go:334] "Generic (PLEG): container finished" podID="a29594ee-7274-4690-a79d-ef6bd3a8a2fd" containerID="dbd2364c35e67e76146c6f002481a8873206dc1d6bada39b3c602172035c8c65" exitCode=0 Dec 12 16:12:39 crc kubenswrapper[4693]: I1212 16:12:39.045832 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-55c8q" event={"ID":"a29594ee-7274-4690-a79d-ef6bd3a8a2fd","Type":"ContainerDied","Data":"dbd2364c35e67e76146c6f002481a8873206dc1d6bada39b3c602172035c8c65"} Dec 12 16:12:40 crc kubenswrapper[4693]: E1212 16:12:40.248137 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 12 16:12:40 crc kubenswrapper[4693]: E1212 16:12:40.248419 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bk66h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-pxq6t_openstack(1f6afc80-5a96-44ee-98c0-89a474913867): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:12:40 crc kubenswrapper[4693]: E1212 16:12:40.249751 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-pxq6t" podUID="1f6afc80-5a96-44ee-98c0-89a474913867" Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.381081 4693 util.go:48] "No ready sandbox for pod can be found. 
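The "PullImage from image service failed ... context canceled" / ErrImagePull failure above for barbican-db-sync-pxq6t surfaces in the pod's container status as a Waiting state, which is usually the fastest way to confirm from the API side what these kubelet lines show. A hedged client-go sketch (assumes in-cluster credentials; the namespace and pod name are taken from this log):

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("openstack").Get(context.TODO(),
		"barbican-db-sync-pxq6t", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// ErrImagePull / ImagePullBackOff appear as a Waiting state per container.
	for _, st := range pod.Status.ContainerStatuses {
		if w := st.State.Waiting; w != nil {
			fmt.Printf("%s: %s: %s\n", st.Name, w.Reason, w.Message)
		}
	}
}
```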
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.507531 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-ovsdbserver-nb\") pod \"0545ccf6-794f-4a89-912a-0e07df8534f6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.508037 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-config\") pod \"0545ccf6-794f-4a89-912a-0e07df8534f6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.508157 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwnlw\" (UniqueName: \"kubernetes.io/projected/0545ccf6-794f-4a89-912a-0e07df8534f6-kube-api-access-nwnlw\") pod \"0545ccf6-794f-4a89-912a-0e07df8534f6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.508192 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-dns-swift-storage-0\") pod \"0545ccf6-794f-4a89-912a-0e07df8534f6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.508243 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-dns-svc\") pod \"0545ccf6-794f-4a89-912a-0e07df8534f6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.508295 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-ovsdbserver-sb\") pod \"0545ccf6-794f-4a89-912a-0e07df8534f6\" (UID: \"0545ccf6-794f-4a89-912a-0e07df8534f6\") " Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.513970 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0545ccf6-794f-4a89-912a-0e07df8534f6-kube-api-access-nwnlw" (OuterVolumeSpecName: "kube-api-access-nwnlw") pod "0545ccf6-794f-4a89-912a-0e07df8534f6" (UID: "0545ccf6-794f-4a89-912a-0e07df8534f6"). InnerVolumeSpecName "kube-api-access-nwnlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.562834 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0545ccf6-794f-4a89-912a-0e07df8534f6" (UID: "0545ccf6-794f-4a89-912a-0e07df8534f6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.564661 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0545ccf6-794f-4a89-912a-0e07df8534f6" (UID: "0545ccf6-794f-4a89-912a-0e07df8534f6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.576786 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0545ccf6-794f-4a89-912a-0e07df8534f6" (UID: "0545ccf6-794f-4a89-912a-0e07df8534f6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.577715 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0545ccf6-794f-4a89-912a-0e07df8534f6" (UID: "0545ccf6-794f-4a89-912a-0e07df8534f6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.584143 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-config" (OuterVolumeSpecName: "config") pod "0545ccf6-794f-4a89-912a-0e07df8534f6" (UID: "0545ccf6-794f-4a89-912a-0e07df8534f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.613284 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.613328 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwnlw\" (UniqueName: \"kubernetes.io/projected/0545ccf6-794f-4a89-912a-0e07df8534f6-kube-api-access-nwnlw\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.613345 4693 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.613360 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.613381 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:40 crc kubenswrapper[4693]: I1212 16:12:40.613392 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0545ccf6-794f-4a89-912a-0e07df8534f6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:40 crc kubenswrapper[4693]: E1212 16:12:40.750176 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Dec 12 16:12:40 crc kubenswrapper[4693]: E1212 16:12:40.750344 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gw8hp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-wrwhd_openstack(f9bc2e28-21b6-42e1-a680-92426ae37ecf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:12:40 crc kubenswrapper[4693]: E1212 16:12:40.752500 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-wrwhd" podUID="f9bc2e28-21b6-42e1-a680-92426ae37ecf" Dec 12 16:12:41 crc kubenswrapper[4693]: I1212 16:12:41.075410 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" event={"ID":"0545ccf6-794f-4a89-912a-0e07df8534f6","Type":"ContainerDied","Data":"0b80d3a2d30994fd7ad361c807b8f1ad8fcbeb67c3fd917a4639805371aad4d1"} Dec 12 16:12:41 crc kubenswrapper[4693]: I1212 16:12:41.075597 4693 scope.go:117] "RemoveContainer" containerID="c5fbdf994d0ccfa3303aad6ff34a4ab6fc3442a4c6d81b9c39abd9ee873b0af4" Dec 12 16:12:41 crc kubenswrapper[4693]: I1212 16:12:41.076078 4693 util.go:48] "No ready sandbox for pod can be found. 
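By this point the readiness probe for dnsmasq-dns-764c5664d7-drvh6 has failed repeatedly ("dial tcp 10.217.0.163:5353: i/o timeout" at 16:12:27, :32, :37), which is why the pod is torn down and replaced rather than kept in service. The failure output is consistent with a tcpSocket-style check against the pod IP on port 5353. A sketch of such a probe in the core/v1 Go types; the period/timeout/threshold values are assumptions inferred from the roughly 5 s failure spacing, not read from the dnsmasq deployment, and ProbeHandler assumes a v1.23-era or newer k8s.io/api.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A TCP readiness probe like the one failing in the log: kubelet dials
	// <podIP>:5353 and marks the container unready when the dial times out.
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			TCPSocket: &corev1.TCPSocketAction{
				Port: intstr.FromInt(5353),
			},
		},
		PeriodSeconds:    5, // assumed; matches the ~5s spacing of the failures
		TimeoutSeconds:   3, // assumed
		FailureThreshold: 3, // assumed
	}
	fmt.Printf("%+v\n", probe)
}
```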
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" Dec 12 16:12:41 crc kubenswrapper[4693]: E1212 16:12:41.077652 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-wrwhd" podUID="f9bc2e28-21b6-42e1-a680-92426ae37ecf" Dec 12 16:12:41 crc kubenswrapper[4693]: E1212 16:12:41.079843 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-pxq6t" podUID="1f6afc80-5a96-44ee-98c0-89a474913867" Dec 12 16:12:41 crc kubenswrapper[4693]: I1212 16:12:41.165758 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-drvh6"] Dec 12 16:12:41 crc kubenswrapper[4693]: I1212 16:12:41.176084 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-drvh6"] Dec 12 16:12:41 crc kubenswrapper[4693]: E1212 16:12:41.276565 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 12 16:12:41 crc kubenswrapper[4693]: E1212 16:12:41.276758 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5ch5ddh57dh5cbh56dh59ch5d6h64fh4h5c4hc7h697hd6hfdhfdh97h64h657hf6hc5h667h5ddh649h5dch59fh54dh549h67h5d7h5b9hf9h7dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gn7tp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(fe4e1190-a9f2-4010-98d3-a41898274b56): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:12:41 crc kubenswrapper[4693]: I1212 16:12:41.375285 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0545ccf6-794f-4a89-912a-0e07df8534f6" path="/var/lib/kubelet/pods/0545ccf6-794f-4a89-912a-0e07df8534f6/volumes" Dec 12 16:12:42 crc kubenswrapper[4693]: I1212 16:12:42.454476 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-drvh6" podUID="0545ccf6-794f-4a89-912a-0e07df8534f6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Dec 12 16:12:42 crc kubenswrapper[4693]: I1212 16:12:42.530697 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:12:42 crc kubenswrapper[4693]: I1212 16:12:42.531182 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:12:44 crc kubenswrapper[4693]: E1212 16:12:44.517827 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 12 16:12:44 crc kubenswrapper[4693]: E1212 16:12:44.518079 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vv74m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-6fb94_openstack(42ae7c15-9f4d-4ef8-83d7-279226e74846): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:12:44 crc kubenswrapper[4693]: E1212 16:12:44.520038 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-6fb94" podUID="42ae7c15-9f4d-4ef8-83d7-279226e74846" Dec 12 16:12:44 crc kubenswrapper[4693]: I1212 16:12:44.565043 4693 scope.go:117] "RemoveContainer" containerID="15bc37e8526176158fc8e414dfce95a31933555384df2cfd235965896ff0ac83" Dec 12 16:12:44 crc kubenswrapper[4693]: I1212 16:12:44.720355 4693 util.go:48] "No ready sandbox for pod can be found. 
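Each ErrImagePull above (barbican, heat, ceilometer, and here cinder) is followed by pod_workers' "Error syncing pod, skipping", and on the next sync attempt the same containers report ImagePullBackOff instead: kubelet does not retry the pull immediately but on a doubling backoff that the Kubernetes documentation describes as capped at five minutes. A toy illustration of that documented progression; the 10 s base is an assumption, not read from this kubelet's configuration.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Kubelet retries failed image pulls with a doubling backoff capped at
	// five minutes (documented behavior; the exact base is assumed here).
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("pull attempt %d failed; next retry in %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```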
Need to start a new one" pod="openstack/neutron-db-sync-55c8q" Dec 12 16:12:44 crc kubenswrapper[4693]: I1212 16:12:44.910809 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrv2s\" (UniqueName: \"kubernetes.io/projected/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-kube-api-access-jrv2s\") pod \"a29594ee-7274-4690-a79d-ef6bd3a8a2fd\" (UID: \"a29594ee-7274-4690-a79d-ef6bd3a8a2fd\") " Dec 12 16:12:44 crc kubenswrapper[4693]: I1212 16:12:44.910937 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-combined-ca-bundle\") pod \"a29594ee-7274-4690-a79d-ef6bd3a8a2fd\" (UID: \"a29594ee-7274-4690-a79d-ef6bd3a8a2fd\") " Dec 12 16:12:44 crc kubenswrapper[4693]: I1212 16:12:44.911037 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-config\") pod \"a29594ee-7274-4690-a79d-ef6bd3a8a2fd\" (UID: \"a29594ee-7274-4690-a79d-ef6bd3a8a2fd\") " Dec 12 16:12:44 crc kubenswrapper[4693]: I1212 16:12:44.917141 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-kube-api-access-jrv2s" (OuterVolumeSpecName: "kube-api-access-jrv2s") pod "a29594ee-7274-4690-a79d-ef6bd3a8a2fd" (UID: "a29594ee-7274-4690-a79d-ef6bd3a8a2fd"). InnerVolumeSpecName "kube-api-access-jrv2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:12:44 crc kubenswrapper[4693]: I1212 16:12:44.967719 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a29594ee-7274-4690-a79d-ef6bd3a8a2fd" (UID: "a29594ee-7274-4690-a79d-ef6bd3a8a2fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:12:44 crc kubenswrapper[4693]: I1212 16:12:44.972725 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-config" (OuterVolumeSpecName: "config") pod "a29594ee-7274-4690-a79d-ef6bd3a8a2fd" (UID: "a29594ee-7274-4690-a79d-ef6bd3a8a2fd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:12:45 crc kubenswrapper[4693]: I1212 16:12:45.014069 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:45 crc kubenswrapper[4693]: I1212 16:12:45.014106 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrv2s\" (UniqueName: \"kubernetes.io/projected/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-kube-api-access-jrv2s\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:45 crc kubenswrapper[4693]: I1212 16:12:45.014123 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a29594ee-7274-4690-a79d-ef6bd3a8a2fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:45 crc kubenswrapper[4693]: I1212 16:12:45.128662 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7ht5v"] Dec 12 16:12:45 crc kubenswrapper[4693]: I1212 16:12:45.136717 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-55c8q" event={"ID":"a29594ee-7274-4690-a79d-ef6bd3a8a2fd","Type":"ContainerDied","Data":"076f17cf3b02b7b44ca389611a3d5b64e2af7fb5329c6b547840f004fc784e72"} Dec 12 16:12:45 crc kubenswrapper[4693]: I1212 16:12:45.136745 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-55c8q" Dec 12 16:12:45 crc kubenswrapper[4693]: I1212 16:12:45.136755 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="076f17cf3b02b7b44ca389611a3d5b64e2af7fb5329c6b547840f004fc784e72" Dec 12 16:12:45 crc kubenswrapper[4693]: I1212 16:12:45.143797 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-26nbh" event={"ID":"9560987d-9cb8-4361-9c44-3be630e46634","Type":"ContainerStarted","Data":"f22fb99ecf781d897c8de73974db44be5f6cb5c0848f6f592e99eff8c1281886"} Dec 12 16:12:45 crc kubenswrapper[4693]: E1212 16:12:45.144840 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-6fb94" podUID="42ae7c15-9f4d-4ef8-83d7-279226e74846" Dec 12 16:12:45 crc kubenswrapper[4693]: I1212 16:12:45.211355 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-26nbh" podStartSLOduration=2.400519031 podStartE2EDuration="39.21133405s" podCreationTimestamp="2025-12-12 16:12:06 +0000 UTC" firstStartedPulling="2025-12-12 16:12:07.70380669 +0000 UTC m=+1554.872446291" lastFinishedPulling="2025-12-12 16:12:44.514621709 +0000 UTC m=+1591.683261310" observedRunningTime="2025-12-12 16:12:45.172765363 +0000 UTC m=+1592.341404964" watchObservedRunningTime="2025-12-12 16:12:45.21133405 +0000 UTC m=+1592.379973651" Dec 12 16:12:45 crc kubenswrapper[4693]: W1212 16:12:45.340699 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf907e8ff_aa6a_44c2_a4ca_d73203442782.slice/crio-cdb6836c7ae6f36a575d6b20371771ba557682427d178da350269e1ff779657e WatchSource:0}: Error finding container cdb6836c7ae6f36a575d6b20371771ba557682427d178da350269e1ff779657e: Status 404 returned error can't find the container with id 
cdb6836c7ae6f36a575d6b20371771ba557682427d178da350269e1ff779657e Dec 12 16:12:45 crc kubenswrapper[4693]: I1212 16:12:45.466458 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8d2xs"] Dec 12 16:12:45 crc kubenswrapper[4693]: I1212 16:12:45.478078 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-h6vgx"] Dec 12 16:12:45 crc kubenswrapper[4693]: W1212 16:12:45.478238 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda85781c9_4fb2_4c98_802d_d9fa60ff72e0.slice/crio-5980c14a286a5c91786f71579d28f02bc9ec00144ae02039f513f8c33aaed198 WatchSource:0}: Error finding container 5980c14a286a5c91786f71579d28f02bc9ec00144ae02039f513f8c33aaed198: Status 404 returned error can't find the container with id 5980c14a286a5c91786f71579d28f02bc9ec00144ae02039f513f8c33aaed198 Dec 12 16:12:45 crc kubenswrapper[4693]: I1212 16:12:45.553256 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 16:12:45 crc kubenswrapper[4693]: W1212 16:12:45.576555 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96c548dd_1e5e_485e_b823_0ec52cacadf5.slice/crio-e5418fb0e4331133d017ac5b5b9d15899615ffe658f95913b1e15db4ab4c1875 WatchSource:0}: Error finding container e5418fb0e4331133d017ac5b5b9d15899615ffe658f95913b1e15db4ab4c1875: Status 404 returned error can't find the container with id e5418fb0e4331133d017ac5b5b9d15899615ffe658f95913b1e15db4ab4c1875 Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.150218 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-h6vgx"] Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.228726 4693 generic.go:334] "Generic (PLEG): container finished" podID="f907e8ff-aa6a-44c2-a4ca-d73203442782" containerID="1137ef1ce82c31ef921c3e4a996550c3fdf9f24077ba9efdc47d8224595543c2" exitCode=0 Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.229039 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ht5v" event={"ID":"f907e8ff-aa6a-44c2-a4ca-d73203442782","Type":"ContainerDied","Data":"1137ef1ce82c31ef921c3e4a996550c3fdf9f24077ba9efdc47d8224595543c2"} Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.229096 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ht5v" event={"ID":"f907e8ff-aa6a-44c2-a4ca-d73203442782","Type":"ContainerStarted","Data":"cdb6836c7ae6f36a575d6b20371771ba557682427d178da350269e1ff779657e"} Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.257905 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8d2xs" event={"ID":"a85781c9-4fb2-4c98-802d-d9fa60ff72e0","Type":"ContainerStarted","Data":"318c91efc08fa6feccde5468ecdf5ab4d6c67c2147e6f7014d182371342bbd32"} Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.257941 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8d2xs" event={"ID":"a85781c9-4fb2-4c98-802d-d9fa60ff72e0","Type":"ContainerStarted","Data":"5980c14a286a5c91786f71579d28f02bc9ec00144ae02039f513f8c33aaed198"} Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.260507 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kcgxg"] Dec 12 16:12:46 crc kubenswrapper[4693]: E1212 16:12:46.261052 4693 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a29594ee-7274-4690-a79d-ef6bd3a8a2fd" containerName="neutron-db-sync" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.261122 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a29594ee-7274-4690-a79d-ef6bd3a8a2fd" containerName="neutron-db-sync" Dec 12 16:12:46 crc kubenswrapper[4693]: E1212 16:12:46.261208 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0545ccf6-794f-4a89-912a-0e07df8534f6" containerName="dnsmasq-dns" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.261256 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="0545ccf6-794f-4a89-912a-0e07df8534f6" containerName="dnsmasq-dns" Dec 12 16:12:46 crc kubenswrapper[4693]: E1212 16:12:46.261327 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0545ccf6-794f-4a89-912a-0e07df8534f6" containerName="init" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.261386 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="0545ccf6-794f-4a89-912a-0e07df8534f6" containerName="init" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.261636 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="0545ccf6-794f-4a89-912a-0e07df8534f6" containerName="dnsmasq-dns" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.261742 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="a29594ee-7274-4690-a79d-ef6bd3a8a2fd" containerName="neutron-db-sync" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.262920 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.270588 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e1190-a9f2-4010-98d3-a41898274b56","Type":"ContainerStarted","Data":"97ec1dfe60655abfc2cb4a1695a1e9c983e0eec6b8ae1292c1c9dc6f39857180"} Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.306337 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kcgxg"] Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.337389 4693 generic.go:334] "Generic (PLEG): container finished" podID="701784f3-b9b0-4f46-82e5-c07ba162de29" containerID="96f4daa61e2214bfd118d8082d17780acc1132a9b7c7eec297b15e2cb396c2cf" exitCode=0 Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.337497 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx" event={"ID":"701784f3-b9b0-4f46-82e5-c07ba162de29","Type":"ContainerDied","Data":"96f4daa61e2214bfd118d8082d17780acc1132a9b7c7eec297b15e2cb396c2cf"} Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.337554 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx" event={"ID":"701784f3-b9b0-4f46-82e5-c07ba162de29","Type":"ContainerStarted","Data":"480f2c31068f1393f5b2a6462c8ee553e5d0fee1889775033a93ba8d5821952c"} Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.352921 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96c548dd-1e5e-485e-b823-0ec52cacadf5","Type":"ContainerStarted","Data":"e5418fb0e4331133d017ac5b5b9d15899615ffe658f95913b1e15db4ab4c1875"} Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.380762 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-config\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.380841 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.380922 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.380949 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.381042 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz9hn\" (UniqueName: \"kubernetes.io/projected/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-kube-api-access-wz9hn\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.381100 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-dns-svc\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.498699 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-config\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.503361 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.503586 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.503657 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.503845 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz9hn\" (UniqueName: \"kubernetes.io/projected/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-kube-api-access-wz9hn\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.504019 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-dns-svc\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.510799 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.502108 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-config\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.517732 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.520971 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.522226 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-dns-svc\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.560075 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.572447 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz9hn\" (UniqueName: \"kubernetes.io/projected/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-kube-api-access-wz9hn\") pod \"dnsmasq-dns-55f844cf75-kcgxg\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.655337 4693 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.657358 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-68cb54fffb-t6qwm"] Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.663351 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.666428 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.666789 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xm9jc" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.666811 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.671346 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.720614 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwxrb\" (UniqueName: \"kubernetes.io/projected/d5518acc-0be1-4b59-874d-61eeb018a534-kube-api-access-qwxrb\") pod \"neutron-68cb54fffb-t6qwm\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") " pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.721214 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-ovndb-tls-certs\") pod \"neutron-68cb54fffb-t6qwm\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") " pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.721341 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-combined-ca-bundle\") pod \"neutron-68cb54fffb-t6qwm\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") " pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.721532 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-httpd-config\") pod \"neutron-68cb54fffb-t6qwm\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") " pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.721579 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-config\") pod \"neutron-68cb54fffb-t6qwm\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") " pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.728016 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68cb54fffb-t6qwm"] Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.732046 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8d2xs" podStartSLOduration=17.732029327 podStartE2EDuration="17.732029327s" podCreationTimestamp="2025-12-12 16:12:29 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:12:46.405310588 +0000 UTC m=+1593.573950189" watchObservedRunningTime="2025-12-12 16:12:46.732029327 +0000 UTC m=+1593.900668928" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.837624 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-httpd-config\") pod \"neutron-68cb54fffb-t6qwm\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") " pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.837748 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-config\") pod \"neutron-68cb54fffb-t6qwm\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") " pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.847080 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwxrb\" (UniqueName: \"kubernetes.io/projected/d5518acc-0be1-4b59-874d-61eeb018a534-kube-api-access-qwxrb\") pod \"neutron-68cb54fffb-t6qwm\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") " pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.847178 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-ovndb-tls-certs\") pod \"neutron-68cb54fffb-t6qwm\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") " pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.847439 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-combined-ca-bundle\") pod \"neutron-68cb54fffb-t6qwm\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") " pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.852686 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-config\") pod \"neutron-68cb54fffb-t6qwm\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") " pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.854082 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-httpd-config\") pod \"neutron-68cb54fffb-t6qwm\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") " pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.861343 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-ovndb-tls-certs\") pod \"neutron-68cb54fffb-t6qwm\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") " pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.870308 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-combined-ca-bundle\") pod \"neutron-68cb54fffb-t6qwm\" 
(UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") " pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.887455 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwxrb\" (UniqueName: \"kubernetes.io/projected/d5518acc-0be1-4b59-874d-61eeb018a534-kube-api-access-qwxrb\") pod \"neutron-68cb54fffb-t6qwm\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") " pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:46 crc kubenswrapper[4693]: I1212 16:12:46.970173 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:12:47 crc kubenswrapper[4693]: E1212 16:12:47.111960 4693 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Dec 12 16:12:47 crc kubenswrapper[4693]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/701784f3-b9b0-4f46-82e5-c07ba162de29/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 12 16:12:47 crc kubenswrapper[4693]: > podSandboxID="480f2c31068f1393f5b2a6462c8ee553e5d0fee1889775033a93ba8d5821952c" Dec 12 16:12:47 crc kubenswrapper[4693]: E1212 16:12:47.112698 4693 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 12 16:12:47 crc kubenswrapper[4693]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n86h549h66h5b5h5dfh587h56bh555h586h5h67fh584h665h5f8h689h64dh58fhf6hd9h648h5bfhcfh55fh696h7bh55fh5f8h65h5dh57h645h596q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qh29t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-785d8bcb8c-h6vgx_openstack(701784f3-b9b0-4f46-82e5-c07ba162de29): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/701784f3-b9b0-4f46-82e5-c07ba162de29/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 12 16:12:47 crc kubenswrapper[4693]: > logger="UnhandledError" Dec 12 16:12:47 crc kubenswrapper[4693]: E1212 16:12:47.113868 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/701784f3-b9b0-4f46-82e5-c07ba162de29/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx" podUID="701784f3-b9b0-4f46-82e5-c07ba162de29" Dec 12 16:12:47 crc kubenswrapper[4693]: I1212 16:12:47.432620 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96c548dd-1e5e-485e-b823-0ec52cacadf5","Type":"ContainerStarted","Data":"a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae"} Dec 12 16:12:47 crc kubenswrapper[4693]: I1212 16:12:47.432953 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cc41f5a-262d-4513-b064-c12dfea1625b","Type":"ContainerStarted","Data":"819425a25b4684dd594d7f5afc383aab3f6c2a762b127547c52a9d3310274c69"} Dec 12 16:12:47 crc kubenswrapper[4693]: I1212 16:12:47.529323 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kcgxg"] Dec 12 16:12:47 crc kubenswrapper[4693]: W1212 16:12:47.534636 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7f0e3f3_2cff_44bd_9f42_9736dc1947bb.slice/crio-3f9245c51a24ac5b01d3b8ce19467edb1e379cfd42cffec51c4d7b7f4c3a6789 WatchSource:0}: Error finding container 3f9245c51a24ac5b01d3b8ce19467edb1e379cfd42cffec51c4d7b7f4c3a6789: Status 404 returned error can't find the container with id 3f9245c51a24ac5b01d3b8ce19467edb1e379cfd42cffec51c4d7b7f4c3a6789 Dec 12 16:12:47 crc kubenswrapper[4693]: I1212 16:12:47.945854 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68cb54fffb-t6qwm"] Dec 12 16:12:47 crc kubenswrapper[4693]: W1212 16:12:47.971529 4693 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5518acc_0be1_4b59_874d_61eeb018a534.slice/crio-a92cf39e7ee3c6946df5263b8fd9a5ef46303dc5dd58e9d95782239103f70ab9 WatchSource:0}: Error finding container a92cf39e7ee3c6946df5263b8fd9a5ef46303dc5dd58e9d95782239103f70ab9: Status 404 returned error can't find the container with id a92cf39e7ee3c6946df5263b8fd9a5ef46303dc5dd58e9d95782239103f70ab9 Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.014025 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.110005 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-config\") pod \"701784f3-b9b0-4f46-82e5-c07ba162de29\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.110188 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh29t\" (UniqueName: \"kubernetes.io/projected/701784f3-b9b0-4f46-82e5-c07ba162de29-kube-api-access-qh29t\") pod \"701784f3-b9b0-4f46-82e5-c07ba162de29\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.110345 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-dns-svc\") pod \"701784f3-b9b0-4f46-82e5-c07ba162de29\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.110383 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-ovsdbserver-nb\") pod \"701784f3-b9b0-4f46-82e5-c07ba162de29\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.110406 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-ovsdbserver-sb\") pod \"701784f3-b9b0-4f46-82e5-c07ba162de29\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.110462 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-dns-swift-storage-0\") pod \"701784f3-b9b0-4f46-82e5-c07ba162de29\" (UID: \"701784f3-b9b0-4f46-82e5-c07ba162de29\") " Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.130186 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/701784f3-b9b0-4f46-82e5-c07ba162de29-kube-api-access-qh29t" (OuterVolumeSpecName: "kube-api-access-qh29t") pod "701784f3-b9b0-4f46-82e5-c07ba162de29" (UID: "701784f3-b9b0-4f46-82e5-c07ba162de29"). InnerVolumeSpecName "kube-api-access-qh29t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.214337 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh29t\" (UniqueName: \"kubernetes.io/projected/701784f3-b9b0-4f46-82e5-c07ba162de29-kube-api-access-qh29t\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.353020 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "701784f3-b9b0-4f46-82e5-c07ba162de29" (UID: "701784f3-b9b0-4f46-82e5-c07ba162de29"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.389629 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "701784f3-b9b0-4f46-82e5-c07ba162de29" (UID: "701784f3-b9b0-4f46-82e5-c07ba162de29"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.391922 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "701784f3-b9b0-4f46-82e5-c07ba162de29" (UID: "701784f3-b9b0-4f46-82e5-c07ba162de29"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.399783 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-config" (OuterVolumeSpecName: "config") pod "701784f3-b9b0-4f46-82e5-c07ba162de29" (UID: "701784f3-b9b0-4f46-82e5-c07ba162de29"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.406581 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "701784f3-b9b0-4f46-82e5-c07ba162de29" (UID: "701784f3-b9b0-4f46-82e5-c07ba162de29"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.420063 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.420386 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.420470 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.420567 4693 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.420712 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/701784f3-b9b0-4f46-82e5-c07ba162de29-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.443880 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cb54fffb-t6qwm" event={"ID":"d5518acc-0be1-4b59-874d-61eeb018a534","Type":"ContainerStarted","Data":"a92cf39e7ee3c6946df5263b8fd9a5ef46303dc5dd58e9d95782239103f70ab9"} Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.447581 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx" event={"ID":"701784f3-b9b0-4f46-82e5-c07ba162de29","Type":"ContainerDied","Data":"480f2c31068f1393f5b2a6462c8ee553e5d0fee1889775033a93ba8d5821952c"} Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.447642 4693 scope.go:117] "RemoveContainer" containerID="96f4daa61e2214bfd118d8082d17780acc1132a9b7c7eec297b15e2cb396c2cf" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.447638 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-h6vgx" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.451477 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" event={"ID":"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb","Type":"ContainerStarted","Data":"3f9245c51a24ac5b01d3b8ce19467edb1e379cfd42cffec51c4d7b7f4c3a6789"} Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.533770 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-h6vgx"] Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.551694 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-h6vgx"] Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.631621 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5788c966cf-j7w8t"] Dec 12 16:12:48 crc kubenswrapper[4693]: E1212 16:12:48.632161 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="701784f3-b9b0-4f46-82e5-c07ba162de29" containerName="init" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.632175 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="701784f3-b9b0-4f46-82e5-c07ba162de29" containerName="init" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.632397 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="701784f3-b9b0-4f46-82e5-c07ba162de29" containerName="init" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.633819 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.653971 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5788c966cf-j7w8t"] Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.660428 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.660810 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.736541 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-internal-tls-certs\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.736596 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-httpd-config\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.736618 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nlck\" (UniqueName: \"kubernetes.io/projected/497a9259-1ec1-47b8-b93e-820de80b527a-kube-api-access-9nlck\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.736685 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-ovndb-tls-certs\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.736734 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-combined-ca-bundle\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.736793 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-config\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.736831 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-public-tls-certs\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.838890 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-combined-ca-bundle\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.839081 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-config\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.839147 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-public-tls-certs\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.839323 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-internal-tls-certs\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.839394 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-httpd-config\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.839419 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nlck\" (UniqueName: \"kubernetes.io/projected/497a9259-1ec1-47b8-b93e-820de80b527a-kube-api-access-9nlck\") pod 
\"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.839535 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-ovndb-tls-certs\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.849875 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-internal-tls-certs\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.850039 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-ovndb-tls-certs\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.851812 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-combined-ca-bundle\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.856027 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-public-tls-certs\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.857859 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-httpd-config\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.865999 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/497a9259-1ec1-47b8-b93e-820de80b527a-config\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.866648 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nlck\" (UniqueName: \"kubernetes.io/projected/497a9259-1ec1-47b8-b93e-820de80b527a-kube-api-access-9nlck\") pod \"neutron-5788c966cf-j7w8t\" (UID: \"497a9259-1ec1-47b8-b93e-820de80b527a\") " pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:48 crc kubenswrapper[4693]: I1212 16:12:48.996693 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:49 crc kubenswrapper[4693]: I1212 16:12:49.376917 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="701784f3-b9b0-4f46-82e5-c07ba162de29" path="/var/lib/kubelet/pods/701784f3-b9b0-4f46-82e5-c07ba162de29/volumes" Dec 12 16:12:49 crc kubenswrapper[4693]: I1212 16:12:49.482593 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" event={"ID":"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb","Type":"ContainerStarted","Data":"944bdd326496ed4b3f51540d97a851a4fa42bf210e03d618070553ca24900ea6"} Dec 12 16:12:49 crc kubenswrapper[4693]: I1212 16:12:49.486608 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ht5v" event={"ID":"f907e8ff-aa6a-44c2-a4ca-d73203442782","Type":"ContainerStarted","Data":"9256f902816c8fa2b303feb5e738bcd41e4f91d80c3516c27b07f1c111ae7aed"} Dec 12 16:12:49 crc kubenswrapper[4693]: I1212 16:12:49.490847 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cc41f5a-262d-4513-b064-c12dfea1625b","Type":"ContainerStarted","Data":"f20310758fe4242a78e4ad9f6fd236ff2aecf7b25845f19285ce33e9de67030f"} Dec 12 16:12:49 crc kubenswrapper[4693]: I1212 16:12:49.505134 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cb54fffb-t6qwm" event={"ID":"d5518acc-0be1-4b59-874d-61eeb018a534","Type":"ContainerStarted","Data":"f0684416f63a3faafc53939611786d5ca688417f4c669ac6ab70fbc3c3d18802"} Dec 12 16:12:49 crc kubenswrapper[4693]: I1212 16:12:49.530325 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96c548dd-1e5e-485e-b823-0ec52cacadf5","Type":"ContainerStarted","Data":"b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3"} Dec 12 16:12:49 crc kubenswrapper[4693]: I1212 16:12:49.530850 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="96c548dd-1e5e-485e-b823-0ec52cacadf5" containerName="glance-log" containerID="cri-o://a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae" gracePeriod=30 Dec 12 16:12:49 crc kubenswrapper[4693]: I1212 16:12:49.531163 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="96c548dd-1e5e-485e-b823-0ec52cacadf5" containerName="glance-httpd" containerID="cri-o://b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3" gracePeriod=30 Dec 12 16:12:49 crc kubenswrapper[4693]: I1212 16:12:49.615799 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=28.615771069 podStartE2EDuration="28.615771069s" podCreationTimestamp="2025-12-12 16:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:12:49.603754125 +0000 UTC m=+1596.772393736" watchObservedRunningTime="2025-12-12 16:12:49.615771069 +0000 UTC m=+1596.784410680" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.029184 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5788c966cf-j7w8t"] Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.289977 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.406756 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") pod \"96c548dd-1e5e-485e-b823-0ec52cacadf5\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.406856 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96c548dd-1e5e-485e-b823-0ec52cacadf5-httpd-run\") pod \"96c548dd-1e5e-485e-b823-0ec52cacadf5\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.407044 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-config-data\") pod \"96c548dd-1e5e-485e-b823-0ec52cacadf5\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.407094 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c548dd-1e5e-485e-b823-0ec52cacadf5-logs\") pod \"96c548dd-1e5e-485e-b823-0ec52cacadf5\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.407241 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-scripts\") pod \"96c548dd-1e5e-485e-b823-0ec52cacadf5\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.407389 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6dbj\" (UniqueName: \"kubernetes.io/projected/96c548dd-1e5e-485e-b823-0ec52cacadf5-kube-api-access-s6dbj\") pod \"96c548dd-1e5e-485e-b823-0ec52cacadf5\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.407413 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-combined-ca-bundle\") pod \"96c548dd-1e5e-485e-b823-0ec52cacadf5\" (UID: \"96c548dd-1e5e-485e-b823-0ec52cacadf5\") " Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.408011 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96c548dd-1e5e-485e-b823-0ec52cacadf5-logs" (OuterVolumeSpecName: "logs") pod "96c548dd-1e5e-485e-b823-0ec52cacadf5" (UID: "96c548dd-1e5e-485e-b823-0ec52cacadf5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.408186 4693 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c548dd-1e5e-485e-b823-0ec52cacadf5-logs\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.408683 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96c548dd-1e5e-485e-b823-0ec52cacadf5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "96c548dd-1e5e-485e-b823-0ec52cacadf5" (UID: "96c548dd-1e5e-485e-b823-0ec52cacadf5"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.412547 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-scripts" (OuterVolumeSpecName: "scripts") pod "96c548dd-1e5e-485e-b823-0ec52cacadf5" (UID: "96c548dd-1e5e-485e-b823-0ec52cacadf5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.414497 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c548dd-1e5e-485e-b823-0ec52cacadf5-kube-api-access-s6dbj" (OuterVolumeSpecName: "kube-api-access-s6dbj") pod "96c548dd-1e5e-485e-b823-0ec52cacadf5" (UID: "96c548dd-1e5e-485e-b823-0ec52cacadf5"). InnerVolumeSpecName "kube-api-access-s6dbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.425230 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596" (OuterVolumeSpecName: "glance") pod "96c548dd-1e5e-485e-b823-0ec52cacadf5" (UID: "96c548dd-1e5e-485e-b823-0ec52cacadf5"). InnerVolumeSpecName "pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.449356 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96c548dd-1e5e-485e-b823-0ec52cacadf5" (UID: "96c548dd-1e5e-485e-b823-0ec52cacadf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.476468 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-config-data" (OuterVolumeSpecName: "config-data") pod "96c548dd-1e5e-485e-b823-0ec52cacadf5" (UID: "96c548dd-1e5e-485e-b823-0ec52cacadf5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.510552 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.510585 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6dbj\" (UniqueName: \"kubernetes.io/projected/96c548dd-1e5e-485e-b823-0ec52cacadf5-kube-api-access-s6dbj\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.510597 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.510633 4693 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") on node \"crc\" " Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.510643 4693 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96c548dd-1e5e-485e-b823-0ec52cacadf5-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.510652 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c548dd-1e5e-485e-b823-0ec52cacadf5-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.551806 4693 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.552026 4693 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596") on node "crc"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.556605 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5788c966cf-j7w8t" event={"ID":"497a9259-1ec1-47b8-b93e-820de80b527a","Type":"ContainerStarted","Data":"32683a91ecee84864305ae130dd07e7e01b63114f6fcfc8eaa4e5d9e20ddd059"}
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.559915 4693 generic.go:334] "Generic (PLEG): container finished" podID="96c548dd-1e5e-485e-b823-0ec52cacadf5" containerID="b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3" exitCode=143
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.559939 4693 generic.go:334] "Generic (PLEG): container finished" podID="96c548dd-1e5e-485e-b823-0ec52cacadf5" containerID="a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae" exitCode=143
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.559980 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96c548dd-1e5e-485e-b823-0ec52cacadf5","Type":"ContainerDied","Data":"b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3"}
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.559999 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96c548dd-1e5e-485e-b823-0ec52cacadf5","Type":"ContainerDied","Data":"a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae"}
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.560009 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96c548dd-1e5e-485e-b823-0ec52cacadf5","Type":"ContainerDied","Data":"e5418fb0e4331133d017ac5b5b9d15899615ffe658f95913b1e15db4ab4c1875"}
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.560024 4693 scope.go:117] "RemoveContainer" containerID="b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.560142 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.565025 4693 generic.go:334] "Generic (PLEG): container finished" podID="c7f0e3f3-2cff-44bd-9f42-9736dc1947bb" containerID="944bdd326496ed4b3f51540d97a851a4fa42bf210e03d618070553ca24900ea6" exitCode=0
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.565112 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" event={"ID":"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb","Type":"ContainerDied","Data":"944bdd326496ed4b3f51540d97a851a4fa42bf210e03d618070553ca24900ea6"}
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.582555 4693 generic.go:334] "Generic (PLEG): container finished" podID="f907e8ff-aa6a-44c2-a4ca-d73203442782" containerID="9256f902816c8fa2b303feb5e738bcd41e4f91d80c3516c27b07f1c111ae7aed" exitCode=0
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.582633 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ht5v" event={"ID":"f907e8ff-aa6a-44c2-a4ca-d73203442782","Type":"ContainerDied","Data":"9256f902816c8fa2b303feb5e738bcd41e4f91d80c3516c27b07f1c111ae7aed"}
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.603766 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cb54fffb-t6qwm" event={"ID":"d5518acc-0be1-4b59-874d-61eeb018a534","Type":"ContainerStarted","Data":"caa1e11e8c9cc286ab857822f3d6e99bf2856df54e087cb0fc6b52ceede8c666"}
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.606008 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-68cb54fffb-t6qwm"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.617191 4693 reconciler_common.go:293] "Volume detached for volume \"pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.651706 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-68cb54fffb-t6qwm" podStartSLOduration=4.651681665 podStartE2EDuration="4.651681665s" podCreationTimestamp="2025-12-12 16:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:12:50.637507543 +0000 UTC m=+1597.806147144" watchObservedRunningTime="2025-12-12 16:12:50.651681665 +0000 UTC m=+1597.820321266"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.746145 4693 scope.go:117] "RemoveContainer" containerID="a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.749426 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.760371 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.796067 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 12 16:12:50 crc kubenswrapper[4693]: E1212 16:12:50.796670 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c548dd-1e5e-485e-b823-0ec52cacadf5" containerName="glance-log"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.796698 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c548dd-1e5e-485e-b823-0ec52cacadf5" containerName="glance-log"
Dec 12 16:12:50 crc kubenswrapper[4693]: E1212 16:12:50.796737 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c548dd-1e5e-485e-b823-0ec52cacadf5" containerName="glance-httpd"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.796747 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c548dd-1e5e-485e-b823-0ec52cacadf5" containerName="glance-httpd"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.797023 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c548dd-1e5e-485e-b823-0ec52cacadf5" containerName="glance-log"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.797065 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c548dd-1e5e-485e-b823-0ec52cacadf5" containerName="glance-httpd"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.798611 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.803689 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.803939 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.813670 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.897009 4693 scope.go:117] "RemoveContainer" containerID="b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3"
Dec 12 16:12:50 crc kubenswrapper[4693]: E1212 16:12:50.898018 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3\": container with ID starting with b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3 not found: ID does not exist" containerID="b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.898048 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3"} err="failed to get container status \"b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3\": rpc error: code = NotFound desc = could not find container \"b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3\": container with ID starting with b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3 not found: ID does not exist"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.898069 4693 scope.go:117] "RemoveContainer" containerID="a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae"
Dec 12 16:12:50 crc kubenswrapper[4693]: E1212 16:12:50.898865 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae\": container with ID starting with a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae not found: ID does not exist" containerID="a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.898909 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae"} err="failed to get container status \"a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae\": rpc error: code = NotFound desc = could not find container \"a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae\": container with ID starting with a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae not found: ID does not exist"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.898935 4693 scope.go:117] "RemoveContainer" containerID="b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.899595 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3"} err="failed to get container status \"b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3\": rpc error: code = NotFound desc = could not find container \"b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3\": container with ID starting with b4e8634751f970ac902ffa72310e23c020076d7f3457f9a21341b792349979a3 not found: ID does not exist"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.899613 4693 scope.go:117] "RemoveContainer" containerID="a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.902314 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae"} err="failed to get container status \"a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae\": rpc error: code = NotFound desc = could not find container \"a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae\": container with ID starting with a3c9edb7745dc68fb8ac71aff4bece0dd3c7da70a8c1b5190ecb337ee5105cae not found: ID does not exist"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.981216 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzdnw\" (UniqueName: \"kubernetes.io/projected/ad919c71-0957-489d-8ae6-a69c33ab65b5-kube-api-access-rzdnw\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.981324 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad919c71-0957-489d-8ae6-a69c33ab65b5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.981391 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.981422 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.981457 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.981478 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.981517 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad919c71-0957-489d-8ae6-a69c33ab65b5-logs\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:50 crc kubenswrapper[4693]: I1212 16:12:50.981535 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.083424 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.083487 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.083531 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.083552 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.083636 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/ad919c71-0957-489d-8ae6-a69c33ab65b5-logs\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.083654 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.083706 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzdnw\" (UniqueName: \"kubernetes.io/projected/ad919c71-0957-489d-8ae6-a69c33ab65b5-kube-api-access-rzdnw\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.083774 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad919c71-0957-489d-8ae6-a69c33ab65b5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.084193 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad919c71-0957-489d-8ae6-a69c33ab65b5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.084901 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad919c71-0957-489d-8ae6-a69c33ab65b5-logs\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.091938 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.092631 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.092995 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.096070 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.096109 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/50f66c90a996e74d8c468f9c70fbfefea786aa10bcb017475eedefeec7499784/globalmount\"" pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.102587 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.115077 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzdnw\" (UniqueName: \"kubernetes.io/projected/ad919c71-0957-489d-8ae6-a69c33ab65b5-kube-api-access-rzdnw\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.174568 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") pod \"glance-default-internal-api-0\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.211958 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.406127 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96c548dd-1e5e-485e-b823-0ec52cacadf5" path="/var/lib/kubelet/pods/96c548dd-1e5e-485e-b823-0ec52cacadf5/volumes"
Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.619369 4693 generic.go:334] "Generic (PLEG): container finished" podID="9560987d-9cb8-4361-9c44-3be630e46634" containerID="f22fb99ecf781d897c8de73974db44be5f6cb5c0848f6f592e99eff8c1281886" exitCode=0
Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.619459 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-26nbh" event={"ID":"9560987d-9cb8-4361-9c44-3be630e46634","Type":"ContainerDied","Data":"f22fb99ecf781d897c8de73974db44be5f6cb5c0848f6f592e99eff8c1281886"}
Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.629757 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" event={"ID":"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb","Type":"ContainerStarted","Data":"803d23edb766c615520609a0d9fe607f675ce9478744b4ac67b439452885d411"}
Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.629858 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-kcgxg"
Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.632163 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cc41f5a-262d-4513-b064-c12dfea1625b","Type":"ContainerStarted","Data":"22a941cda336e4887fc2b6efeb9093f8770b4f941180aeea86ba80f9fbd77daa"}
Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.635337 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7cc41f5a-262d-4513-b064-c12dfea1625b" containerName="glance-log" containerID="cri-o://f20310758fe4242a78e4ad9f6fd236ff2aecf7b25845f19285ce33e9de67030f" gracePeriod=30
Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.635490 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7cc41f5a-262d-4513-b064-c12dfea1625b" containerName="glance-httpd" containerID="cri-o://22a941cda336e4887fc2b6efeb9093f8770b4f941180aeea86ba80f9fbd77daa" gracePeriod=30
Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.643726 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5788c966cf-j7w8t" event={"ID":"497a9259-1ec1-47b8-b93e-820de80b527a","Type":"ContainerStarted","Data":"7bf5d4b699b36720dff76edd442629c78a9110ee783bd6270f018e7585d231e2"}
Dec 12 16:12:51 crc kubenswrapper[4693]: I1212 16:12:51.668165 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=31.668132837 podStartE2EDuration="31.668132837s" podCreationTimestamp="2025-12-12 16:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:12:51.665406174 +0000 UTC m=+1598.834045805" watchObservedRunningTime="2025-12-12 16:12:51.668132837 +0000 UTC m=+1598.836772438"
Dec 12 16:12:52 crc kubenswrapper[4693]: I1212 16:12:52.142375 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Dec 12 16:12:52 crc kubenswrapper[4693]: I1212 16:12:52.142463 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Dec 12 16:12:52 crc kubenswrapper[4693]: I1212 16:12:52.667847 4693 generic.go:334] "Generic (PLEG): container finished" podID="7cc41f5a-262d-4513-b064-c12dfea1625b" containerID="22a941cda336e4887fc2b6efeb9093f8770b4f941180aeea86ba80f9fbd77daa" exitCode=0
Dec 12 16:12:52 crc kubenswrapper[4693]: I1212 16:12:52.667886 4693 generic.go:334] "Generic (PLEG): container finished" podID="7cc41f5a-262d-4513-b064-c12dfea1625b" containerID="f20310758fe4242a78e4ad9f6fd236ff2aecf7b25845f19285ce33e9de67030f" exitCode=143
Dec 12 16:12:52 crc kubenswrapper[4693]: I1212 16:12:52.667914 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cc41f5a-262d-4513-b064-c12dfea1625b","Type":"ContainerDied","Data":"22a941cda336e4887fc2b6efeb9093f8770b4f941180aeea86ba80f9fbd77daa"}
Dec 12 16:12:52 crc kubenswrapper[4693]: I1212 16:12:52.667949 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cc41f5a-262d-4513-b064-c12dfea1625b","Type":"ContainerDied","Data":"f20310758fe4242a78e4ad9f6fd236ff2aecf7b25845f19285ce33e9de67030f"}
Dec 12 16:12:53 crc kubenswrapper[4693]: I1212 16:12:53.425707 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" podStartSLOduration=7.425048687 podStartE2EDuration="7.425048687s" podCreationTimestamp="2025-12-12 16:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:12:51.693261493 +0000 UTC m=+1598.861901094" watchObservedRunningTime="2025-12-12 16:12:53.425048687 +0000 UTC m=+1600.593688288"
Dec 12 16:12:53 crc kubenswrapper[4693]: I1212 16:12:53.685567 4693 generic.go:334] "Generic (PLEG): container finished" podID="a85781c9-4fb2-4c98-802d-d9fa60ff72e0" containerID="318c91efc08fa6feccde5468ecdf5ab4d6c67c2147e6f7014d182371342bbd32" exitCode=0
Dec 12 16:12:53 crc kubenswrapper[4693]: I1212 16:12:53.685610 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8d2xs" event={"ID":"a85781c9-4fb2-4c98-802d-d9fa60ff72e0","Type":"ContainerDied","Data":"318c91efc08fa6feccde5468ecdf5ab4d6c67c2147e6f7014d182371342bbd32"}
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.290146 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-26nbh"
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.295672 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8d2xs"
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.452108 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnwcm\" (UniqueName: \"kubernetes.io/projected/9560987d-9cb8-4361-9c44-3be630e46634-kube-api-access-rnwcm\") pod \"9560987d-9cb8-4361-9c44-3be630e46634\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") "
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.452734 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-credential-keys\") pod \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") "
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.452867 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-scripts\") pod \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") "
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.452909 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9560987d-9cb8-4361-9c44-3be630e46634-logs\") pod \"9560987d-9cb8-4361-9c44-3be630e46634\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") "
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.452954 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-config-data\") pod \"9560987d-9cb8-4361-9c44-3be630e46634\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") "
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.452987 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-config-data\") pod \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") "
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.453140 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-combined-ca-bundle\") pod \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") "
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.453177 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-fernet-keys\") pod \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") "
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.453232 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc9sc\" (UniqueName: \"kubernetes.io/projected/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-kube-api-access-gc9sc\") pod \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\" (UID: \"a85781c9-4fb2-4c98-802d-d9fa60ff72e0\") "
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.453393 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-scripts\") pod \"9560987d-9cb8-4361-9c44-3be630e46634\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") "
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.453456 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-combined-ca-bundle\") pod \"9560987d-9cb8-4361-9c44-3be630e46634\" (UID: \"9560987d-9cb8-4361-9c44-3be630e46634\") "
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.453815 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9560987d-9cb8-4361-9c44-3be630e46634-logs" (OuterVolumeSpecName: "logs") pod "9560987d-9cb8-4361-9c44-3be630e46634" (UID: "9560987d-9cb8-4361-9c44-3be630e46634"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.454648 4693 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9560987d-9cb8-4361-9c44-3be630e46634-logs\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.484474 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-kube-api-access-gc9sc" (OuterVolumeSpecName: "kube-api-access-gc9sc") pod "a85781c9-4fb2-4c98-802d-d9fa60ff72e0" (UID: "a85781c9-4fb2-4c98-802d-d9fa60ff72e0"). InnerVolumeSpecName "kube-api-access-gc9sc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.487967 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-scripts" (OuterVolumeSpecName: "scripts") pod "9560987d-9cb8-4361-9c44-3be630e46634" (UID: "9560987d-9cb8-4361-9c44-3be630e46634"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.489546 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9560987d-9cb8-4361-9c44-3be630e46634-kube-api-access-rnwcm" (OuterVolumeSpecName: "kube-api-access-rnwcm") pod "9560987d-9cb8-4361-9c44-3be630e46634" (UID: "9560987d-9cb8-4361-9c44-3be630e46634"). InnerVolumeSpecName "kube-api-access-rnwcm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.526591 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a85781c9-4fb2-4c98-802d-d9fa60ff72e0" (UID: "a85781c9-4fb2-4c98-802d-d9fa60ff72e0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.526760 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a85781c9-4fb2-4c98-802d-d9fa60ff72e0" (UID: "a85781c9-4fb2-4c98-802d-d9fa60ff72e0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.547980 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-scripts" (OuterVolumeSpecName: "scripts") pod "a85781c9-4fb2-4c98-802d-d9fa60ff72e0" (UID: "a85781c9-4fb2-4c98-802d-d9fa60ff72e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.559005 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnwcm\" (UniqueName: \"kubernetes.io/projected/9560987d-9cb8-4361-9c44-3be630e46634-kube-api-access-rnwcm\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.559258 4693 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.559426 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-scripts\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.559509 4693 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.559599 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc9sc\" (UniqueName: \"kubernetes.io/projected/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-kube-api-access-gc9sc\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.559687 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-scripts\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.658144 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-kcgxg"
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.791621 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-26nbh" event={"ID":"9560987d-9cb8-4361-9c44-3be630e46634","Type":"ContainerDied","Data":"f5f9f5bd4dcbff5460fd86db0b1aac2dd978203c834c945c174b47373d1ac892"}
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.792142 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5f9f5bd4dcbff5460fd86db0b1aac2dd978203c834c945c174b47373d1ac892"
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.792241 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-26nbh"
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.813725 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8d2xs" event={"ID":"a85781c9-4fb2-4c98-802d-d9fa60ff72e0","Type":"ContainerDied","Data":"5980c14a286a5c91786f71579d28f02bc9ec00144ae02039f513f8c33aaed198"}
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.813772 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5980c14a286a5c91786f71579d28f02bc9ec00144ae02039f513f8c33aaed198"
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.813850 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8d2xs"
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.884536 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-chngd"]
Dec 12 16:12:56 crc kubenswrapper[4693]: I1212 16:12:56.884846 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd" podUID="3d6ab82c-1fed-49ec-b76a-ba30dd458f26" containerName="dnsmasq-dns" containerID="cri-o://38760689a66bed25a0cffc2c3c10fac796a5863301b3c8867251d48b3ebec7c8" gracePeriod=10
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.116365 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-config-data" (OuterVolumeSpecName: "config-data") pod "a85781c9-4fb2-4c98-802d-d9fa60ff72e0" (UID: "a85781c9-4fb2-4c98-802d-d9fa60ff72e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.139825 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-config-data" (OuterVolumeSpecName: "config-data") pod "9560987d-9cb8-4361-9c44-3be630e46634" (UID: "9560987d-9cb8-4361-9c44-3be630e46634"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.170788 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-config-data\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.170869 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-config-data\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.192656 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a85781c9-4fb2-4c98-802d-d9fa60ff72e0" (UID: "a85781c9-4fb2-4c98-802d-d9fa60ff72e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.204128 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9560987d-9cb8-4361-9c44-3be630e46634" (UID: "9560987d-9cb8-4361-9c44-3be630e46634"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.282843 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a85781c9-4fb2-4c98-802d-d9fa60ff72e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.282937 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9560987d-9cb8-4361-9c44-3be630e46634-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.506891 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-79886c984d-rtppc"]
Dec 12 16:12:57 crc kubenswrapper[4693]: E1212 16:12:57.517005 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9560987d-9cb8-4361-9c44-3be630e46634" containerName="placement-db-sync"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.517041 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="9560987d-9cb8-4361-9c44-3be630e46634" containerName="placement-db-sync"
Dec 12 16:12:57 crc kubenswrapper[4693]: E1212 16:12:57.517081 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a85781c9-4fb2-4c98-802d-d9fa60ff72e0" containerName="keystone-bootstrap"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.517089 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a85781c9-4fb2-4c98-802d-d9fa60ff72e0" containerName="keystone-bootstrap"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.517452 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="a85781c9-4fb2-4c98-802d-d9fa60ff72e0" containerName="keystone-bootstrap"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.517472 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="9560987d-9cb8-4361-9c44-3be630e46634" containerName="placement-db-sync"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.521807 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.535130 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.535411 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.535637 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.535824 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.542952 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-tqrpr"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.542980 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-79886c984d-rtppc"]
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.579835 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-777f5984bd-7smjx"]
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.581493 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.587394 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.587671 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.594252 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.595675 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.595843 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9xp5g"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.595971 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.607241 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-777f5984bd-7smjx"]
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.607669 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwcvg\" (UniqueName: \"kubernetes.io/projected/93ac9f81-903f-43c7-9846-c4732f2fecea-kube-api-access-qwcvg\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.607732 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ac9f81-903f-43c7-9846-c4732f2fecea-logs\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.607766 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ac9f81-903f-43c7-9846-c4732f2fecea-config-data\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.607799 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93ac9f81-903f-43c7-9846-c4732f2fecea-public-tls-certs\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.607824 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93ac9f81-903f-43c7-9846-c4732f2fecea-combined-ca-bundle\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.607944 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93ac9f81-903f-43c7-9846-c4732f2fecea-internal-tls-certs\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.607988 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93ac9f81-903f-43c7-9846-c4732f2fecea-scripts\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.648143 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.672024 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.711988 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-fernet-keys\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.712077 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwcvg\" (UniqueName: \"kubernetes.io/projected/93ac9f81-903f-43c7-9846-c4732f2fecea-kube-api-access-qwcvg\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.712117 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ac9f81-903f-43c7-9846-c4732f2fecea-logs\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.712152 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ac9f81-903f-43c7-9846-c4732f2fecea-config-data\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.712197 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93ac9f81-903f-43c7-9846-c4732f2fecea-public-tls-certs\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.712229 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93ac9f81-903f-43c7-9846-c4732f2fecea-combined-ca-bundle\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.712375 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-config-data\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.712406 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-public-tls-certs\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.712433 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r84c6\" (UniqueName: \"kubernetes.io/projected/4e76970f-0d47-4ec1-93a6-7ede9b782808-kube-api-access-r84c6\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.712492 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-combined-ca-bundle\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.712528 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93ac9f81-903f-43c7-9846-c4732f2fecea-internal-tls-certs\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.712570 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-credential-keys\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.712595 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93ac9f81-903f-43c7-9846-c4732f2fecea-scripts\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.712622 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-internal-tls-certs\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.712675 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-scripts\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.714048 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ac9f81-903f-43c7-9846-c4732f2fecea-logs\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.722152 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ac9f81-903f-43c7-9846-c4732f2fecea-config-data\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.727230 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93ac9f81-903f-43c7-9846-c4732f2fecea-internal-tls-certs\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.731960 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93ac9f81-903f-43c7-9846-c4732f2fecea-combined-ca-bundle\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.734556 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93ac9f81-903f-43c7-9846-c4732f2fecea-scripts\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.734614 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwcvg\" (UniqueName: \"kubernetes.io/projected/93ac9f81-903f-43c7-9846-c4732f2fecea-kube-api-access-qwcvg\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.738347 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93ac9f81-903f-43c7-9846-c4732f2fecea-public-tls-certs\") pod \"placement-79886c984d-rtppc\" (UID: \"93ac9f81-903f-43c7-9846-c4732f2fecea\") " pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.817646 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cc41f5a-262d-4513-b064-c12dfea1625b-logs\") pod \"7cc41f5a-262d-4513-b064-c12dfea1625b\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") "
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.817877 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-scripts\") pod \"7cc41f5a-262d-4513-b064-c12dfea1625b\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") "
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.818009 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-combined-ca-bundle\") pod \"7cc41f5a-262d-4513-b064-c12dfea1625b\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") "
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.818203 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-config-data\") pod \"7cc41f5a-262d-4513-b064-c12dfea1625b\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") "
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.819059 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cc41f5a-262d-4513-b064-c12dfea1625b-logs" (OuterVolumeSpecName: "logs") pod "7cc41f5a-262d-4513-b064-c12dfea1625b" (UID: "7cc41f5a-262d-4513-b064-c12dfea1625b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.834260 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-79886c984d-rtppc"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.835417 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czvhl\" (UniqueName: \"kubernetes.io/projected/7cc41f5a-262d-4513-b064-c12dfea1625b-kube-api-access-czvhl\") pod \"7cc41f5a-262d-4513-b064-c12dfea1625b\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") "
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.835589 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cc41f5a-262d-4513-b064-c12dfea1625b-httpd-run\") pod \"7cc41f5a-262d-4513-b064-c12dfea1625b\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") "
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.835969 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") pod \"7cc41f5a-262d-4513-b064-c12dfea1625b\" (UID: \"7cc41f5a-262d-4513-b064-c12dfea1625b\") "
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.836808 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cc41f5a-262d-4513-b064-c12dfea1625b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7cc41f5a-262d-4513-b064-c12dfea1625b" (UID: "7cc41f5a-262d-4513-b064-c12dfea1625b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.838394 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-scripts" (OuterVolumeSpecName: "scripts") pod "7cc41f5a-262d-4513-b064-c12dfea1625b" (UID: "7cc41f5a-262d-4513-b064-c12dfea1625b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.839418 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-config-data\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.839449 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-public-tls-certs\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.839480 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r84c6\" (UniqueName: \"kubernetes.io/projected/4e76970f-0d47-4ec1-93a6-7ede9b782808-kube-api-access-r84c6\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.839553 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-combined-ca-bundle\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.839640 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-credential-keys\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.839676 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-internal-tls-certs\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.839762 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-scripts\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.839941 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-fernet-keys\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.840111 4693 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cc41f5a-262d-4513-b064-c12dfea1625b-httpd-run\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.840124 4693 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cc41f5a-262d-4513-b064-c12dfea1625b-logs\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.840134 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-scripts\") on node \"crc\" DevicePath \"\""
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.852590 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cc41f5a-262d-4513-b064-c12dfea1625b","Type":"ContainerDied","Data":"819425a25b4684dd594d7f5afc383aab3f6c2a762b127547c52a9d3310274c69"}
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.852648 4693 scope.go:117] "RemoveContainer" containerID="22a941cda336e4887fc2b6efeb9093f8770b4f941180aeea86ba80f9fbd77daa"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.852774 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.856249 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cc41f5a-262d-4513-b064-c12dfea1625b-kube-api-access-czvhl" (OuterVolumeSpecName: "kube-api-access-czvhl") pod "7cc41f5a-262d-4513-b064-c12dfea1625b" (UID: "7cc41f5a-262d-4513-b064-c12dfea1625b"). InnerVolumeSpecName "kube-api-access-czvhl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.867148 4693 generic.go:334] "Generic (PLEG): container finished" podID="3d6ab82c-1fed-49ec-b76a-ba30dd458f26" containerID="38760689a66bed25a0cffc2c3c10fac796a5863301b3c8867251d48b3ebec7c8" exitCode=0
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.867239 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-fernet-keys\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.867261 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd" event={"ID":"3d6ab82c-1fed-49ec-b76a-ba30dd458f26","Type":"ContainerDied","Data":"38760689a66bed25a0cffc2c3c10fac796a5863301b3c8867251d48b3ebec7c8"}
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.868431 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-scripts\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.869128 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-combined-ca-bundle\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.871281 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-internal-tls-certs\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx"
Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.871611 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-credential-keys\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx" Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.872768 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-public-tls-certs\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx" Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.873868 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e76970f-0d47-4ec1-93a6-7ede9b782808-config-data\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx" Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.886051 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r84c6\" (UniqueName: \"kubernetes.io/projected/4e76970f-0d47-4ec1-93a6-7ede9b782808-kube-api-access-r84c6\") pod \"keystone-777f5984bd-7smjx\" (UID: \"4e76970f-0d47-4ec1-93a6-7ede9b782808\") " pod="openstack/keystone-777f5984bd-7smjx" Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.947580 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5788c966cf-j7w8t" event={"ID":"497a9259-1ec1-47b8-b93e-820de80b527a","Type":"ContainerStarted","Data":"022ec30ce77530b695a42cea45b3d91ba833b59e215b72c3e6b7f893fb2f94b7"} Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.949576 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:12:57 crc kubenswrapper[4693]: I1212 16:12:57.950863 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czvhl\" (UniqueName: \"kubernetes.io/projected/7cc41f5a-262d-4513-b064-c12dfea1625b-kube-api-access-czvhl\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:57.999572 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e1190-a9f2-4010-98d3-a41898274b56","Type":"ContainerStarted","Data":"f36f503c49816caa9ecee74f12ef1f56d425aa50c6df41362486cc3bf5a9f169"} Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.022524 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5788c966cf-j7w8t" podStartSLOduration=10.022505257 podStartE2EDuration="10.022505257s" podCreationTimestamp="2025-12-12 16:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:12:57.98509021 +0000 UTC m=+1605.153729811" watchObservedRunningTime="2025-12-12 16:12:58.022505257 +0000 UTC m=+1605.191144858" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.028659 4693 scope.go:117] "RemoveContainer" containerID="f20310758fe4242a78e4ad9f6fd236ff2aecf7b25845f19285ce33e9de67030f" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.036922 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pxq6t" 
event={"ID":"1f6afc80-5a96-44ee-98c0-89a474913867","Type":"ContainerStarted","Data":"145b98f988ff459d044a19e450de0cc2029fd1ca3616f4857f7cb6ec6284f26b"} Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.040583 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ad919c71-0957-489d-8ae6-a69c33ab65b5","Type":"ContainerStarted","Data":"97b1668bc7bb34b733b72da4cff9f132dd4841e00ecdb107aac797c0f28f2590"} Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.112228 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-pxq6t" podStartSLOduration=4.380155824 podStartE2EDuration="53.11220918s" podCreationTimestamp="2025-12-12 16:12:05 +0000 UTC" firstStartedPulling="2025-12-12 16:12:07.677471332 +0000 UTC m=+1554.846110933" lastFinishedPulling="2025-12-12 16:12:56.409524688 +0000 UTC m=+1603.578164289" observedRunningTime="2025-12-12 16:12:58.064011503 +0000 UTC m=+1605.232651104" watchObservedRunningTime="2025-12-12 16:12:58.11220918 +0000 UTC m=+1605.280848781" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.171155 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-777f5984bd-7smjx" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.211823 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d" (OuterVolumeSpecName: "glance") pod "7cc41f5a-262d-4513-b064-c12dfea1625b" (UID: "7cc41f5a-262d-4513-b064-c12dfea1625b"). InnerVolumeSpecName "pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.239086 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cc41f5a-262d-4513-b064-c12dfea1625b" (UID: "7cc41f5a-262d-4513-b064-c12dfea1625b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.272509 4693 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") on node \"crc\" " Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.272548 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.321171 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-config-data" (OuterVolumeSpecName: "config-data") pod "7cc41f5a-262d-4513-b064-c12dfea1625b" (UID: "7cc41f5a-262d-4513-b064-c12dfea1625b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.329099 4693 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.329739 4693 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d") on node "crc" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.374535 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc41f5a-262d-4513-b064-c12dfea1625b-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.375170 4693 reconciler_common.go:293] "Volume detached for volume \"pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.518096 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.539447 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.552696 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.568962 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 16:12:58 crc kubenswrapper[4693]: E1212 16:12:58.569452 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cc41f5a-262d-4513-b064-c12dfea1625b" containerName="glance-log" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.569467 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cc41f5a-262d-4513-b064-c12dfea1625b" containerName="glance-log" Dec 12 16:12:58 crc kubenswrapper[4693]: E1212 16:12:58.569495 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cc41f5a-262d-4513-b064-c12dfea1625b" containerName="glance-httpd" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.569504 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cc41f5a-262d-4513-b064-c12dfea1625b" containerName="glance-httpd" Dec 12 16:12:58 crc kubenswrapper[4693]: E1212 16:12:58.569524 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6ab82c-1fed-49ec-b76a-ba30dd458f26" containerName="init" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.569531 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6ab82c-1fed-49ec-b76a-ba30dd458f26" containerName="init" Dec 12 16:12:58 crc kubenswrapper[4693]: E1212 16:12:58.569546 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6ab82c-1fed-49ec-b76a-ba30dd458f26" containerName="dnsmasq-dns" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.569555 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6ab82c-1fed-49ec-b76a-ba30dd458f26" containerName="dnsmasq-dns" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.569769 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d6ab82c-1fed-49ec-b76a-ba30dd458f26" containerName="dnsmasq-dns" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.569788 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cc41f5a-262d-4513-b064-c12dfea1625b" containerName="glance-log" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.569816 4693 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7cc41f5a-262d-4513-b064-c12dfea1625b" containerName="glance-httpd" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.571251 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.574103 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.574572 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.593086 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.684987 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-ovsdbserver-nb\") pod \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.685707 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-dns-swift-storage-0\") pod \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.691903 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-79886c984d-rtppc"] Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.693793 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-ovsdbserver-sb\") pod \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.693875 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-config\") pod \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.693954 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-dns-svc\") pod \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.694083 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm5lq\" (UniqueName: \"kubernetes.io/projected/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-kube-api-access-wm5lq\") pod \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\" (UID: \"3d6ab82c-1fed-49ec-b76a-ba30dd458f26\") " Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.694695 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lncvz\" (UniqueName: \"kubernetes.io/projected/7cdcade9-a317-47ba-a03c-0c355c06e306-kube-api-access-lncvz\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 
16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.694809 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.694847 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-config-data\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.694939 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-scripts\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.695028 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cdcade9-a317-47ba-a03c-0c355c06e306-logs\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.695103 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.695218 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cdcade9-a317-47ba-a03c-0c355c06e306-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.695473 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.711507 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-kube-api-access-wm5lq" (OuterVolumeSpecName: "kube-api-access-wm5lq") pod "3d6ab82c-1fed-49ec-b76a-ba30dd458f26" (UID: "3d6ab82c-1fed-49ec-b76a-ba30dd458f26"). InnerVolumeSpecName "kube-api-access-wm5lq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.798889 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.799021 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lncvz\" (UniqueName: \"kubernetes.io/projected/7cdcade9-a317-47ba-a03c-0c355c06e306-kube-api-access-lncvz\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.799071 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.799097 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-config-data\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.799145 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-scripts\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.799192 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cdcade9-a317-47ba-a03c-0c355c06e306-logs\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.799240 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.808137 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cdcade9-a317-47ba-a03c-0c355c06e306-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.808475 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm5lq\" (UniqueName: \"kubernetes.io/projected/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-kube-api-access-wm5lq\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.808544 4693 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cdcade9-a317-47ba-a03c-0c355c06e306-logs\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.808767 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cdcade9-a317-47ba-a03c-0c355c06e306-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.812214 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.812246 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/45099b66416d1862b963c1d5cb75f06870617309ef888c0d9e553da7cdf42994/globalmount\"" pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.822801 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3d6ab82c-1fed-49ec-b76a-ba30dd458f26" (UID: "3d6ab82c-1fed-49ec-b76a-ba30dd458f26"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.825214 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.826377 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.827152 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-scripts\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.829415 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-config-data\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.851122 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lncvz\" (UniqueName: \"kubernetes.io/projected/7cdcade9-a317-47ba-a03c-0c355c06e306-kube-api-access-lncvz\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:58 crc kubenswrapper[4693]: I1212 16:12:58.915748 4693 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:58.999688 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3d6ab82c-1fed-49ec-b76a-ba30dd458f26" (UID: "3d6ab82c-1fed-49ec-b76a-ba30dd458f26"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.017349 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.040458 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3d6ab82c-1fed-49ec-b76a-ba30dd458f26" (UID: "3d6ab82c-1fed-49ec-b76a-ba30dd458f26"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.050533 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3d6ab82c-1fed-49ec-b76a-ba30dd458f26" (UID: "3d6ab82c-1fed-49ec-b76a-ba30dd458f26"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.073120 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-777f5984bd-7smjx"] Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.119264 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.119326 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.121567 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-config" (OuterVolumeSpecName: "config") pod "3d6ab82c-1fed-49ec-b76a-ba30dd458f26" (UID: "3d6ab82c-1fed-49ec-b76a-ba30dd458f26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.126822 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") pod \"glance-default-external-api-0\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " pod="openstack/glance-default-external-api-0" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.154225 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ht5v" event={"ID":"f907e8ff-aa6a-44c2-a4ca-d73203442782","Type":"ContainerStarted","Data":"db6bcd92bc3410aab1488d58bf876f168ced28f395e21ab267e1379fd780d145"} Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.176925 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79886c984d-rtppc" event={"ID":"93ac9f81-903f-43c7-9846-c4732f2fecea","Type":"ContainerStarted","Data":"90c48f0fc500c521989b8b7410ee90513082b9f1d98a3e89c4c59e4a63772ede"} Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.183518 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7ht5v" podStartSLOduration=34.010233391 podStartE2EDuration="44.183496918s" podCreationTimestamp="2025-12-12 16:12:15 +0000 UTC" firstStartedPulling="2025-12-12 16:12:46.234704559 +0000 UTC m=+1593.403344160" lastFinishedPulling="2025-12-12 16:12:56.407968086 +0000 UTC m=+1603.576607687" observedRunningTime="2025-12-12 16:12:59.182255424 +0000 UTC m=+1606.350895045" watchObservedRunningTime="2025-12-12 16:12:59.183496918 +0000 UTC m=+1606.352136519" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.191049 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-wrwhd" 
event={"ID":"f9bc2e28-21b6-42e1-a680-92426ae37ecf","Type":"ContainerStarted","Data":"13000f0f370c49923d3e6b02013db9c3642e15b1748956210a6a375d579eb6cb"} Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.200033 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.200366 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-chngd" event={"ID":"3d6ab82c-1fed-49ec-b76a-ba30dd458f26","Type":"ContainerDied","Data":"e7a75ae185cca5c1c0b4696f0b7795203edc8b43613ec84c75b70adf0617ba62"} Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.200428 4693 scope.go:117] "RemoveContainer" containerID="38760689a66bed25a0cffc2c3c10fac796a5863301b3c8867251d48b3ebec7c8" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.216369 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.221149 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6ab82c-1fed-49ec-b76a-ba30dd458f26-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.226397 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-wrwhd" podStartSLOduration=5.258352367 podStartE2EDuration="54.226372911s" podCreationTimestamp="2025-12-12 16:12:05 +0000 UTC" firstStartedPulling="2025-12-12 16:12:07.443533619 +0000 UTC m=+1554.612173220" lastFinishedPulling="2025-12-12 16:12:56.411554163 +0000 UTC m=+1603.580193764" observedRunningTime="2025-12-12 16:12:59.221088039 +0000 UTC m=+1606.389727640" watchObservedRunningTime="2025-12-12 16:12:59.226372911 +0000 UTC m=+1606.395012512" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.388584 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cc41f5a-262d-4513-b064-c12dfea1625b" path="/var/lib/kubelet/pods/7cc41f5a-262d-4513-b064-c12dfea1625b/volumes" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.461123 4693 scope.go:117] "RemoveContainer" containerID="daa05e1fc09891c0fc8aef47f3c06bf8eb018ddd5f4560eed7b024f7c7bfac88" Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.513056 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-chngd"] Dec 12 16:12:59 crc kubenswrapper[4693]: I1212 16:12:59.576581 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-chngd"] Dec 12 16:13:00 crc kubenswrapper[4693]: I1212 16:13:00.235913 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79886c984d-rtppc" event={"ID":"93ac9f81-903f-43c7-9846-c4732f2fecea","Type":"ContainerStarted","Data":"9f94fc81bd252ff31bc457cc2242c1df36ad3a63de18e37a4d90ddebaf03cf4e"} Dec 12 16:13:00 crc kubenswrapper[4693]: I1212 16:13:00.244255 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6fb94" event={"ID":"42ae7c15-9f4d-4ef8-83d7-279226e74846","Type":"ContainerStarted","Data":"f9aa529b375e788b21d532f313443a601c90560e7b297c7e6c3a53ca6c35874c"} Dec 12 16:13:00 crc kubenswrapper[4693]: I1212 16:13:00.264393 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-6fb94" podStartSLOduration=5.09453431 podStartE2EDuration="55.264375262s" podCreationTimestamp="2025-12-12 16:12:05 +0000 
UTC" firstStartedPulling="2025-12-12 16:12:07.264711509 +0000 UTC m=+1554.433351110" lastFinishedPulling="2025-12-12 16:12:57.434552471 +0000 UTC m=+1604.603192062" observedRunningTime="2025-12-12 16:13:00.261827284 +0000 UTC m=+1607.430466885" watchObservedRunningTime="2025-12-12 16:13:00.264375262 +0000 UTC m=+1607.433014863" Dec 12 16:13:00 crc kubenswrapper[4693]: I1212 16:13:00.284028 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-777f5984bd-7smjx" event={"ID":"4e76970f-0d47-4ec1-93a6-7ede9b782808","Type":"ContainerStarted","Data":"5efddf9fac3ee8f92330a522e3e07b4b9993af57e8548038a0a123b88ac2f8eb"} Dec 12 16:13:00 crc kubenswrapper[4693]: I1212 16:13:00.284348 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-777f5984bd-7smjx" Dec 12 16:13:00 crc kubenswrapper[4693]: I1212 16:13:00.284857 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-777f5984bd-7smjx" event={"ID":"4e76970f-0d47-4ec1-93a6-7ede9b782808","Type":"ContainerStarted","Data":"eb797acdde1e7e02850c6f04e8bd17b71a6f834c1d39b667b4bab7d8c4313539"} Dec 12 16:13:00 crc kubenswrapper[4693]: I1212 16:13:00.318353 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ad919c71-0957-489d-8ae6-a69c33ab65b5","Type":"ContainerStarted","Data":"28cff692fc6b5e1d81fb23e9cf6c98205139fbbf7d10adf5042d2792df004e4b"} Dec 12 16:13:00 crc kubenswrapper[4693]: I1212 16:13:00.337345 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 16:13:00 crc kubenswrapper[4693]: I1212 16:13:00.404456 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-777f5984bd-7smjx" podStartSLOduration=3.40442291 podStartE2EDuration="3.40442291s" podCreationTimestamp="2025-12-12 16:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:00.307956355 +0000 UTC m=+1607.476595956" watchObservedRunningTime="2025-12-12 16:13:00.40442291 +0000 UTC m=+1607.573062511" Dec 12 16:13:01 crc kubenswrapper[4693]: I1212 16:13:01.347530 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ad919c71-0957-489d-8ae6-a69c33ab65b5","Type":"ContainerStarted","Data":"43597875dcb20f9421d48d134653503b2bd790db4ff608b14535d56ad43af9ea"} Dec 12 16:13:01 crc kubenswrapper[4693]: I1212 16:13:01.381064 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.381045141 podStartE2EDuration="11.381045141s" podCreationTimestamp="2025-12-12 16:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:01.373674772 +0000 UTC m=+1608.542314373" watchObservedRunningTime="2025-12-12 16:13:01.381045141 +0000 UTC m=+1608.549684742" Dec 12 16:13:01 crc kubenswrapper[4693]: I1212 16:13:01.390103 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d6ab82c-1fed-49ec-b76a-ba30dd458f26" path="/var/lib/kubelet/pods/3d6ab82c-1fed-49ec-b76a-ba30dd458f26/volumes" Dec 12 16:13:01 crc kubenswrapper[4693]: I1212 16:13:01.390806 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-79886c984d-rtppc" Dec 12 16:13:01 crc kubenswrapper[4693]: I1212 16:13:01.390831 4693 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-79886c984d-rtppc" Dec 12 16:13:01 crc kubenswrapper[4693]: I1212 16:13:01.390840 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79886c984d-rtppc" event={"ID":"93ac9f81-903f-43c7-9846-c4732f2fecea","Type":"ContainerStarted","Data":"598b6c4c5de36421389b778db6675df76dd83b2cbc87a863928dd583f75d8d6f"} Dec 12 16:13:01 crc kubenswrapper[4693]: I1212 16:13:01.390853 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cdcade9-a317-47ba-a03c-0c355c06e306","Type":"ContainerStarted","Data":"7d53f444a4aa3c963d36827bd360b2e4dba9e9cdc56e80f695f2c5f884bb092a"} Dec 12 16:13:01 crc kubenswrapper[4693]: I1212 16:13:01.413806 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-79886c984d-rtppc" podStartSLOduration=4.413771661 podStartE2EDuration="4.413771661s" podCreationTimestamp="2025-12-12 16:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:01.40890148 +0000 UTC m=+1608.577541071" watchObservedRunningTime="2025-12-12 16:13:01.413771661 +0000 UTC m=+1608.582411262" Dec 12 16:13:02 crc kubenswrapper[4693]: I1212 16:13:02.384072 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cdcade9-a317-47ba-a03c-0c355c06e306","Type":"ContainerStarted","Data":"204379f4d155ff5433acf100874cb455bd1186ef6cfeb539280d401f0ff54414"} Dec 12 16:13:03 crc kubenswrapper[4693]: I1212 16:13:03.423570 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cdcade9-a317-47ba-a03c-0c355c06e306","Type":"ContainerStarted","Data":"02bd07d9e3f2b641563c77bf1d94bc6c73e2168b2c22db3470765a72db2401b8"} Dec 12 16:13:03 crc kubenswrapper[4693]: I1212 16:13:03.484714 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.484689727 podStartE2EDuration="5.484689727s" podCreationTimestamp="2025-12-12 16:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:03.471917004 +0000 UTC m=+1610.640556615" watchObservedRunningTime="2025-12-12 16:13:03.484689727 +0000 UTC m=+1610.653329328" Dec 12 16:13:05 crc kubenswrapper[4693]: I1212 16:13:05.770425 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7ht5v" Dec 12 16:13:05 crc kubenswrapper[4693]: I1212 16:13:05.770835 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7ht5v" Dec 12 16:13:05 crc kubenswrapper[4693]: I1212 16:13:05.828846 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7ht5v" Dec 12 16:13:06 crc kubenswrapper[4693]: I1212 16:13:06.459003 4693 generic.go:334] "Generic (PLEG): container finished" podID="1f6afc80-5a96-44ee-98c0-89a474913867" containerID="145b98f988ff459d044a19e450de0cc2029fd1ca3616f4857f7cb6ec6284f26b" exitCode=0 Dec 12 16:13:06 crc kubenswrapper[4693]: I1212 16:13:06.459074 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pxq6t" 
event={"ID":"1f6afc80-5a96-44ee-98c0-89a474913867","Type":"ContainerDied","Data":"145b98f988ff459d044a19e450de0cc2029fd1ca3616f4857f7cb6ec6284f26b"} Dec 12 16:13:06 crc kubenswrapper[4693]: I1212 16:13:06.520251 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7ht5v" Dec 12 16:13:06 crc kubenswrapper[4693]: I1212 16:13:06.575686 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7ht5v"] Dec 12 16:13:08 crc kubenswrapper[4693]: I1212 16:13:08.498014 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7ht5v" podUID="f907e8ff-aa6a-44c2-a4ca-d73203442782" containerName="registry-server" containerID="cri-o://db6bcd92bc3410aab1488d58bf876f168ced28f395e21ab267e1379fd780d145" gracePeriod=2 Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.115985 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pxq6t" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.219523 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.219905 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.247500 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6afc80-5a96-44ee-98c0-89a474913867-combined-ca-bundle\") pod \"1f6afc80-5a96-44ee-98c0-89a474913867\" (UID: \"1f6afc80-5a96-44ee-98c0-89a474913867\") " Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.247755 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f6afc80-5a96-44ee-98c0-89a474913867-db-sync-config-data\") pod \"1f6afc80-5a96-44ee-98c0-89a474913867\" (UID: \"1f6afc80-5a96-44ee-98c0-89a474913867\") " Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.247838 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk66h\" (UniqueName: \"kubernetes.io/projected/1f6afc80-5a96-44ee-98c0-89a474913867-kube-api-access-bk66h\") pod \"1f6afc80-5a96-44ee-98c0-89a474913867\" (UID: \"1f6afc80-5a96-44ee-98c0-89a474913867\") " Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.256471 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f6afc80-5a96-44ee-98c0-89a474913867-kube-api-access-bk66h" (OuterVolumeSpecName: "kube-api-access-bk66h") pod "1f6afc80-5a96-44ee-98c0-89a474913867" (UID: "1f6afc80-5a96-44ee-98c0-89a474913867"). InnerVolumeSpecName "kube-api-access-bk66h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.257013 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6afc80-5a96-44ee-98c0-89a474913867-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1f6afc80-5a96-44ee-98c0-89a474913867" (UID: "1f6afc80-5a96-44ee-98c0-89a474913867"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.277765 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.278681 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.286549 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6afc80-5a96-44ee-98c0-89a474913867-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f6afc80-5a96-44ee-98c0-89a474913867" (UID: "1f6afc80-5a96-44ee-98c0-89a474913867"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.350807 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6afc80-5a96-44ee-98c0-89a474913867-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.351128 4693 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f6afc80-5a96-44ee-98c0-89a474913867-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.351138 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk66h\" (UniqueName: \"kubernetes.io/projected/1f6afc80-5a96-44ee-98c0-89a474913867-kube-api-access-bk66h\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.469959 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ht5v" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.513914 4693 generic.go:334] "Generic (PLEG): container finished" podID="f907e8ff-aa6a-44c2-a4ca-d73203442782" containerID="db6bcd92bc3410aab1488d58bf876f168ced28f395e21ab267e1379fd780d145" exitCode=0 Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.514000 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ht5v" event={"ID":"f907e8ff-aa6a-44c2-a4ca-d73203442782","Type":"ContainerDied","Data":"db6bcd92bc3410aab1488d58bf876f168ced28f395e21ab267e1379fd780d145"} Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.514039 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ht5v" event={"ID":"f907e8ff-aa6a-44c2-a4ca-d73203442782","Type":"ContainerDied","Data":"cdb6836c7ae6f36a575d6b20371771ba557682427d178da350269e1ff779657e"} Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.514067 4693 scope.go:117] "RemoveContainer" containerID="db6bcd92bc3410aab1488d58bf876f168ced28f395e21ab267e1379fd780d145" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.514249 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ht5v" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.521402 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pxq6t" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.522774 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pxq6t" event={"ID":"1f6afc80-5a96-44ee-98c0-89a474913867","Type":"ContainerDied","Data":"77874625108a10fb1fc054ee4e671e40804e222a6821967e3b09c69e9e296e17"} Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.522831 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77874625108a10fb1fc054ee4e671e40804e222a6821967e3b09c69e9e296e17" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.522866 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.522889 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.543408 4693 scope.go:117] "RemoveContainer" containerID="9256f902816c8fa2b303feb5e738bcd41e4f91d80c3516c27b07f1c111ae7aed" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.554531 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hftc\" (UniqueName: \"kubernetes.io/projected/f907e8ff-aa6a-44c2-a4ca-d73203442782-kube-api-access-2hftc\") pod \"f907e8ff-aa6a-44c2-a4ca-d73203442782\" (UID: \"f907e8ff-aa6a-44c2-a4ca-d73203442782\") " Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.554672 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f907e8ff-aa6a-44c2-a4ca-d73203442782-catalog-content\") pod \"f907e8ff-aa6a-44c2-a4ca-d73203442782\" (UID: \"f907e8ff-aa6a-44c2-a4ca-d73203442782\") " Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.555152 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f907e8ff-aa6a-44c2-a4ca-d73203442782-utilities\") pod \"f907e8ff-aa6a-44c2-a4ca-d73203442782\" (UID: \"f907e8ff-aa6a-44c2-a4ca-d73203442782\") " Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.556085 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f907e8ff-aa6a-44c2-a4ca-d73203442782-utilities" (OuterVolumeSpecName: "utilities") pod "f907e8ff-aa6a-44c2-a4ca-d73203442782" (UID: "f907e8ff-aa6a-44c2-a4ca-d73203442782"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.556848 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f907e8ff-aa6a-44c2-a4ca-d73203442782-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.559944 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f907e8ff-aa6a-44c2-a4ca-d73203442782-kube-api-access-2hftc" (OuterVolumeSpecName: "kube-api-access-2hftc") pod "f907e8ff-aa6a-44c2-a4ca-d73203442782" (UID: "f907e8ff-aa6a-44c2-a4ca-d73203442782"). InnerVolumeSpecName "kube-api-access-2hftc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.575324 4693 scope.go:117] "RemoveContainer" containerID="1137ef1ce82c31ef921c3e4a996550c3fdf9f24077ba9efdc47d8224595543c2" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.608333 4693 scope.go:117] "RemoveContainer" containerID="db6bcd92bc3410aab1488d58bf876f168ced28f395e21ab267e1379fd780d145" Dec 12 16:13:09 crc kubenswrapper[4693]: E1212 16:13:09.609017 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db6bcd92bc3410aab1488d58bf876f168ced28f395e21ab267e1379fd780d145\": container with ID starting with db6bcd92bc3410aab1488d58bf876f168ced28f395e21ab267e1379fd780d145 not found: ID does not exist" containerID="db6bcd92bc3410aab1488d58bf876f168ced28f395e21ab267e1379fd780d145" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.609049 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6bcd92bc3410aab1488d58bf876f168ced28f395e21ab267e1379fd780d145"} err="failed to get container status \"db6bcd92bc3410aab1488d58bf876f168ced28f395e21ab267e1379fd780d145\": rpc error: code = NotFound desc = could not find container \"db6bcd92bc3410aab1488d58bf876f168ced28f395e21ab267e1379fd780d145\": container with ID starting with db6bcd92bc3410aab1488d58bf876f168ced28f395e21ab267e1379fd780d145 not found: ID does not exist" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.609071 4693 scope.go:117] "RemoveContainer" containerID="9256f902816c8fa2b303feb5e738bcd41e4f91d80c3516c27b07f1c111ae7aed" Dec 12 16:13:09 crc kubenswrapper[4693]: E1212 16:13:09.609637 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9256f902816c8fa2b303feb5e738bcd41e4f91d80c3516c27b07f1c111ae7aed\": container with ID starting with 9256f902816c8fa2b303feb5e738bcd41e4f91d80c3516c27b07f1c111ae7aed not found: ID does not exist" containerID="9256f902816c8fa2b303feb5e738bcd41e4f91d80c3516c27b07f1c111ae7aed" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.609688 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9256f902816c8fa2b303feb5e738bcd41e4f91d80c3516c27b07f1c111ae7aed"} err="failed to get container status \"9256f902816c8fa2b303feb5e738bcd41e4f91d80c3516c27b07f1c111ae7aed\": rpc error: code = NotFound desc = could not find container \"9256f902816c8fa2b303feb5e738bcd41e4f91d80c3516c27b07f1c111ae7aed\": container with ID starting with 9256f902816c8fa2b303feb5e738bcd41e4f91d80c3516c27b07f1c111ae7aed not found: ID does not exist" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.609710 4693 scope.go:117] "RemoveContainer" containerID="1137ef1ce82c31ef921c3e4a996550c3fdf9f24077ba9efdc47d8224595543c2" Dec 12 16:13:09 crc kubenswrapper[4693]: E1212 16:13:09.610083 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1137ef1ce82c31ef921c3e4a996550c3fdf9f24077ba9efdc47d8224595543c2\": container with ID starting with 1137ef1ce82c31ef921c3e4a996550c3fdf9f24077ba9efdc47d8224595543c2 not found: ID does not exist" containerID="1137ef1ce82c31ef921c3e4a996550c3fdf9f24077ba9efdc47d8224595543c2" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.610108 4693 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1137ef1ce82c31ef921c3e4a996550c3fdf9f24077ba9efdc47d8224595543c2"} err="failed to get container status \"1137ef1ce82c31ef921c3e4a996550c3fdf9f24077ba9efdc47d8224595543c2\": rpc error: code = NotFound desc = could not find container \"1137ef1ce82c31ef921c3e4a996550c3fdf9f24077ba9efdc47d8224595543c2\": container with ID starting with 1137ef1ce82c31ef921c3e4a996550c3fdf9f24077ba9efdc47d8224595543c2 not found: ID does not exist" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.618137 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f907e8ff-aa6a-44c2-a4ca-d73203442782-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f907e8ff-aa6a-44c2-a4ca-d73203442782" (UID: "f907e8ff-aa6a-44c2-a4ca-d73203442782"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.658804 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hftc\" (UniqueName: \"kubernetes.io/projected/f907e8ff-aa6a-44c2-a4ca-d73203442782-kube-api-access-2hftc\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.658835 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f907e8ff-aa6a-44c2-a4ca-d73203442782-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:09 crc kubenswrapper[4693]: E1212 16:13:09.767819 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="fe4e1190-a9f2-4010-98d3-a41898274b56" Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.861577 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7ht5v"] Dec 12 16:13:09 crc kubenswrapper[4693]: I1212 16:13:09.874198 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7ht5v"] Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.412061 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7b4845df5d-lqsj6"] Dec 12 16:13:10 crc kubenswrapper[4693]: E1212 16:13:10.412970 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f907e8ff-aa6a-44c2-a4ca-d73203442782" containerName="extract-utilities" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.412998 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f907e8ff-aa6a-44c2-a4ca-d73203442782" containerName="extract-utilities" Dec 12 16:13:10 crc kubenswrapper[4693]: E1212 16:13:10.413070 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f907e8ff-aa6a-44c2-a4ca-d73203442782" containerName="registry-server" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.413088 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f907e8ff-aa6a-44c2-a4ca-d73203442782" containerName="registry-server" Dec 12 16:13:10 crc kubenswrapper[4693]: E1212 16:13:10.413109 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6afc80-5a96-44ee-98c0-89a474913867" containerName="barbican-db-sync" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.413119 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6afc80-5a96-44ee-98c0-89a474913867" containerName="barbican-db-sync" Dec 12 16:13:10 crc 
kubenswrapper[4693]: E1212 16:13:10.413145 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f907e8ff-aa6a-44c2-a4ca-d73203442782" containerName="extract-content" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.413154 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f907e8ff-aa6a-44c2-a4ca-d73203442782" containerName="extract-content" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.413456 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f907e8ff-aa6a-44c2-a4ca-d73203442782" containerName="registry-server" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.413493 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f6afc80-5a96-44ee-98c0-89a474913867" containerName="barbican-db-sync" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.414996 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.422380 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-96f49" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.422631 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.422777 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.436806 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b4845df5d-lqsj6"] Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.465245 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-54b848945f-96sqb"] Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.470213 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.474256 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.477744 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fd992ef-8562-4a7a-ac8a-549c71393137-logs\") pod \"barbican-keystone-listener-7b4845df5d-lqsj6\" (UID: \"0fd992ef-8562-4a7a-ac8a-549c71393137\") " pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.478025 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fd992ef-8562-4a7a-ac8a-549c71393137-config-data\") pod \"barbican-keystone-listener-7b4845df5d-lqsj6\" (UID: \"0fd992ef-8562-4a7a-ac8a-549c71393137\") " pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.478069 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fd992ef-8562-4a7a-ac8a-549c71393137-config-data-custom\") pod \"barbican-keystone-listener-7b4845df5d-lqsj6\" (UID: \"0fd992ef-8562-4a7a-ac8a-549c71393137\") " pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.478214 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fd992ef-8562-4a7a-ac8a-549c71393137-combined-ca-bundle\") pod \"barbican-keystone-listener-7b4845df5d-lqsj6\" (UID: \"0fd992ef-8562-4a7a-ac8a-549c71393137\") " pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.478244 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6nmw\" (UniqueName: \"kubernetes.io/projected/0fd992ef-8562-4a7a-ac8a-549c71393137-kube-api-access-m6nmw\") pod \"barbican-keystone-listener-7b4845df5d-lqsj6\" (UID: \"0fd992ef-8562-4a7a-ac8a-549c71393137\") " pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.512970 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-54b848945f-96sqb"] Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.561625 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-v8pg5"] Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.583631 4693 generic.go:334] "Generic (PLEG): container finished" podID="f9bc2e28-21b6-42e1-a680-92426ae37ecf" containerID="13000f0f370c49923d3e6b02013db9c3642e15b1748956210a6a375d579eb6cb" exitCode=0 Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.591726 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-wrwhd" event={"ID":"f9bc2e28-21b6-42e1-a680-92426ae37ecf","Type":"ContainerDied","Data":"13000f0f370c49923d3e6b02013db9c3642e15b1748956210a6a375d579eb6cb"} Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.591977 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.593441 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fd992ef-8562-4a7a-ac8a-549c71393137-config-data\") pod \"barbican-keystone-listener-7b4845df5d-lqsj6\" (UID: \"0fd992ef-8562-4a7a-ac8a-549c71393137\") " pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.593538 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fd992ef-8562-4a7a-ac8a-549c71393137-config-data-custom\") pod \"barbican-keystone-listener-7b4845df5d-lqsj6\" (UID: \"0fd992ef-8562-4a7a-ac8a-549c71393137\") " pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.593934 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fd992ef-8562-4a7a-ac8a-549c71393137-combined-ca-bundle\") pod \"barbican-keystone-listener-7b4845df5d-lqsj6\" (UID: \"0fd992ef-8562-4a7a-ac8a-549c71393137\") " pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.593975 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6nmw\" (UniqueName: \"kubernetes.io/projected/0fd992ef-8562-4a7a-ac8a-549c71393137-kube-api-access-m6nmw\") pod \"barbican-keystone-listener-7b4845df5d-lqsj6\" (UID: \"0fd992ef-8562-4a7a-ac8a-549c71393137\") " pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.594172 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fd992ef-8562-4a7a-ac8a-549c71393137-logs\") pod \"barbican-keystone-listener-7b4845df5d-lqsj6\" (UID: \"0fd992ef-8562-4a7a-ac8a-549c71393137\") " pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.601324 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe4e1190-a9f2-4010-98d3-a41898274b56" containerName="ceilometer-notification-agent" containerID="cri-o://97ec1dfe60655abfc2cb4a1695a1e9c983e0eec6b8ae1292c1c9dc6f39857180" gracePeriod=30 Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.601713 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e1190-a9f2-4010-98d3-a41898274b56","Type":"ContainerStarted","Data":"8a784474fc4d2566a3c7f94d199432fc5b66b9857adfe0228ff878e391dcd2e2"} Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.601868 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.602235 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe4e1190-a9f2-4010-98d3-a41898274b56" containerName="sg-core" containerID="cri-o://f36f503c49816caa9ecee74f12ef1f56d425aa50c6df41362486cc3bf5a9f169" gracePeriod=30 Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.602337 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe4e1190-a9f2-4010-98d3-a41898274b56" 
containerName="proxy-httpd" containerID="cri-o://8a784474fc4d2566a3c7f94d199432fc5b66b9857adfe0228ff878e391dcd2e2" gracePeriod=30 Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.603647 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-v8pg5"] Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.603976 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fd992ef-8562-4a7a-ac8a-549c71393137-logs\") pod \"barbican-keystone-listener-7b4845df5d-lqsj6\" (UID: \"0fd992ef-8562-4a7a-ac8a-549c71393137\") " pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.621150 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fd992ef-8562-4a7a-ac8a-549c71393137-config-data-custom\") pod \"barbican-keystone-listener-7b4845df5d-lqsj6\" (UID: \"0fd992ef-8562-4a7a-ac8a-549c71393137\") " pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.660258 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fd992ef-8562-4a7a-ac8a-549c71393137-combined-ca-bundle\") pod \"barbican-keystone-listener-7b4845df5d-lqsj6\" (UID: \"0fd992ef-8562-4a7a-ac8a-549c71393137\") " pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.691878 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-566c474f44-2zhg2"] Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.697191 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.701302 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1d7e1b1-7750-4182-9df7-ff097defa8b7-logs\") pod \"barbican-worker-54b848945f-96sqb\" (UID: \"b1d7e1b1-7750-4182-9df7-ff097defa8b7\") " pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.701346 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-config\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.701394 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.701427 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-dns-svc\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.701452 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1d7e1b1-7750-4182-9df7-ff097defa8b7-config-data-custom\") pod \"barbican-worker-54b848945f-96sqb\" (UID: \"b1d7e1b1-7750-4182-9df7-ff097defa8b7\") " pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.701479 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.701506 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.701560 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh65s\" (UniqueName: \"kubernetes.io/projected/b1d7e1b1-7750-4182-9df7-ff097defa8b7-kube-api-access-dh65s\") pod \"barbican-worker-54b848945f-96sqb\" (UID: \"b1d7e1b1-7750-4182-9df7-ff097defa8b7\") " pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.701608 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgzsb\" (UniqueName: 
\"kubernetes.io/projected/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-kube-api-access-vgzsb\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.701679 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1d7e1b1-7750-4182-9df7-ff097defa8b7-config-data\") pod \"barbican-worker-54b848945f-96sqb\" (UID: \"b1d7e1b1-7750-4182-9df7-ff097defa8b7\") " pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.701714 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1d7e1b1-7750-4182-9df7-ff097defa8b7-combined-ca-bundle\") pod \"barbican-worker-54b848945f-96sqb\" (UID: \"b1d7e1b1-7750-4182-9df7-ff097defa8b7\") " pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.703545 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.709956 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6nmw\" (UniqueName: \"kubernetes.io/projected/0fd992ef-8562-4a7a-ac8a-549c71393137-kube-api-access-m6nmw\") pod \"barbican-keystone-listener-7b4845df5d-lqsj6\" (UID: \"0fd992ef-8562-4a7a-ac8a-549c71393137\") " pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.723355 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fd992ef-8562-4a7a-ac8a-549c71393137-config-data\") pod \"barbican-keystone-listener-7b4845df5d-lqsj6\" (UID: \"0fd992ef-8562-4a7a-ac8a-549c71393137\") " pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.755083 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-566c474f44-2zhg2"] Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.781600 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.864852 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1d7e1b1-7750-4182-9df7-ff097defa8b7-logs\") pod \"barbican-worker-54b848945f-96sqb\" (UID: \"b1d7e1b1-7750-4182-9df7-ff097defa8b7\") " pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.864910 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-config\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.864987 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.865008 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-dns-svc\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.865033 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1d7e1b1-7750-4182-9df7-ff097defa8b7-config-data-custom\") pod \"barbican-worker-54b848945f-96sqb\" (UID: \"b1d7e1b1-7750-4182-9df7-ff097defa8b7\") " pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.865053 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxvjm\" (UniqueName: \"kubernetes.io/projected/750f263e-745d-4b68-94ee-ab44877c8403-kube-api-access-gxvjm\") pod \"barbican-api-566c474f44-2zhg2\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.865084 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-config-data\") pod \"barbican-api-566c474f44-2zhg2\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.865107 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.865139 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: 
\"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.865173 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/750f263e-745d-4b68-94ee-ab44877c8403-logs\") pod \"barbican-api-566c474f44-2zhg2\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.865245 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh65s\" (UniqueName: \"kubernetes.io/projected/b1d7e1b1-7750-4182-9df7-ff097defa8b7-kube-api-access-dh65s\") pod \"barbican-worker-54b848945f-96sqb\" (UID: \"b1d7e1b1-7750-4182-9df7-ff097defa8b7\") " pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.865322 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgzsb\" (UniqueName: \"kubernetes.io/projected/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-kube-api-access-vgzsb\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.878543 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-config\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.891533 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.892350 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-dns-svc\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.899438 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.900025 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.905423 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-combined-ca-bundle\") pod \"barbican-api-566c474f44-2zhg2\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " 
pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.905475 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1d7e1b1-7750-4182-9df7-ff097defa8b7-config-data\") pod \"barbican-worker-54b848945f-96sqb\" (UID: \"b1d7e1b1-7750-4182-9df7-ff097defa8b7\") " pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.905510 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1d7e1b1-7750-4182-9df7-ff097defa8b7-combined-ca-bundle\") pod \"barbican-worker-54b848945f-96sqb\" (UID: \"b1d7e1b1-7750-4182-9df7-ff097defa8b7\") " pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.905604 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-config-data-custom\") pod \"barbican-api-566c474f44-2zhg2\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.909751 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1d7e1b1-7750-4182-9df7-ff097defa8b7-logs\") pod \"barbican-worker-54b848945f-96sqb\" (UID: \"b1d7e1b1-7750-4182-9df7-ff097defa8b7\") " pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.940525 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1d7e1b1-7750-4182-9df7-ff097defa8b7-config-data\") pod \"barbican-worker-54b848945f-96sqb\" (UID: \"b1d7e1b1-7750-4182-9df7-ff097defa8b7\") " pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.958102 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1d7e1b1-7750-4182-9df7-ff097defa8b7-config-data-custom\") pod \"barbican-worker-54b848945f-96sqb\" (UID: \"b1d7e1b1-7750-4182-9df7-ff097defa8b7\") " pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.959127 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1d7e1b1-7750-4182-9df7-ff097defa8b7-combined-ca-bundle\") pod \"barbican-worker-54b848945f-96sqb\" (UID: \"b1d7e1b1-7750-4182-9df7-ff097defa8b7\") " pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.976230 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgzsb\" (UniqueName: \"kubernetes.io/projected/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-kube-api-access-vgzsb\") pod \"dnsmasq-dns-85ff748b95-v8pg5\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") " pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:10 crc kubenswrapper[4693]: I1212 16:13:10.977784 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:10.982888 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh65s\" (UniqueName: \"kubernetes.io/projected/b1d7e1b1-7750-4182-9df7-ff097defa8b7-kube-api-access-dh65s\") pod \"barbican-worker-54b848945f-96sqb\" (UID: \"b1d7e1b1-7750-4182-9df7-ff097defa8b7\") " pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.008167 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxvjm\" (UniqueName: \"kubernetes.io/projected/750f263e-745d-4b68-94ee-ab44877c8403-kube-api-access-gxvjm\") pod \"barbican-api-566c474f44-2zhg2\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.016953 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-config-data\") pod \"barbican-api-566c474f44-2zhg2\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.017026 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/750f263e-745d-4b68-94ee-ab44877c8403-logs\") pod \"barbican-api-566c474f44-2zhg2\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.017179 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-combined-ca-bundle\") pod \"barbican-api-566c474f44-2zhg2\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.029423 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-config-data-custom\") pod \"barbican-api-566c474f44-2zhg2\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.030511 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/750f263e-745d-4b68-94ee-ab44877c8403-logs\") pod \"barbican-api-566c474f44-2zhg2\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.037236 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-config-data-custom\") pod \"barbican-api-566c474f44-2zhg2\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.048928 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-combined-ca-bundle\") pod \"barbican-api-566c474f44-2zhg2\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " 
pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.049029 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-config-data\") pod \"barbican-api-566c474f44-2zhg2\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.054416 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxvjm\" (UniqueName: \"kubernetes.io/projected/750f263e-745d-4b68-94ee-ab44877c8403-kube-api-access-gxvjm\") pod \"barbican-api-566c474f44-2zhg2\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.138476 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.145028 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-54b848945f-96sqb" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.231899 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.232117 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.319571 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.428141 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f907e8ff-aa6a-44c2-a4ca-d73203442782" path="/var/lib/kubelet/pods/f907e8ff-aa6a-44c2-a4ca-d73203442782/volumes" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.429970 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.615000 4693 generic.go:334] "Generic (PLEG): container finished" podID="fe4e1190-a9f2-4010-98d3-a41898274b56" containerID="f36f503c49816caa9ecee74f12ef1f56d425aa50c6df41362486cc3bf5a9f169" exitCode=2 Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.615158 4693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.615170 4693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.616325 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e1190-a9f2-4010-98d3-a41898274b56","Type":"ContainerDied","Data":"f36f503c49816caa9ecee74f12ef1f56d425aa50c6df41362486cc3bf5a9f169"} Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.616872 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.616893 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 12 16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.835113 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b4845df5d-lqsj6"] Dec 12 
16:13:11 crc kubenswrapper[4693]: I1212 16:13:11.944343 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-566c474f44-2zhg2"] Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.264612 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-v8pg5"] Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.282010 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-54b848945f-96sqb"] Dec 12 16:13:12 crc kubenswrapper[4693]: W1212 16:13:12.304514 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1eb671f_5f5e_4bc1_a560_1ad1daa99569.slice/crio-94ba89576258dd847a1871d7ad2b7af1562549a285cc52bda7703d6aad54d97e WatchSource:0}: Error finding container 94ba89576258dd847a1871d7ad2b7af1562549a285cc52bda7703d6aad54d97e: Status 404 returned error can't find the container with id 94ba89576258dd847a1871d7ad2b7af1562549a285cc52bda7703d6aad54d97e Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.451964 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-wrwhd" Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.535861 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.535931 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.535987 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.537029 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.537104 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" gracePeriod=600 Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.578933 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9bc2e28-21b6-42e1-a680-92426ae37ecf-config-data\") pod \"f9bc2e28-21b6-42e1-a680-92426ae37ecf\" (UID: \"f9bc2e28-21b6-42e1-a680-92426ae37ecf\") " Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.579409 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw8hp\" (UniqueName: 
\"kubernetes.io/projected/f9bc2e28-21b6-42e1-a680-92426ae37ecf-kube-api-access-gw8hp\") pod \"f9bc2e28-21b6-42e1-a680-92426ae37ecf\" (UID: \"f9bc2e28-21b6-42e1-a680-92426ae37ecf\") " Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.579456 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9bc2e28-21b6-42e1-a680-92426ae37ecf-combined-ca-bundle\") pod \"f9bc2e28-21b6-42e1-a680-92426ae37ecf\" (UID: \"f9bc2e28-21b6-42e1-a680-92426ae37ecf\") " Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.612801 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9bc2e28-21b6-42e1-a680-92426ae37ecf-kube-api-access-gw8hp" (OuterVolumeSpecName: "kube-api-access-gw8hp") pod "f9bc2e28-21b6-42e1-a680-92426ae37ecf" (UID: "f9bc2e28-21b6-42e1-a680-92426ae37ecf"). InnerVolumeSpecName "kube-api-access-gw8hp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.641090 4693 generic.go:334] "Generic (PLEG): container finished" podID="fe4e1190-a9f2-4010-98d3-a41898274b56" containerID="8a784474fc4d2566a3c7f94d199432fc5b66b9857adfe0228ff878e391dcd2e2" exitCode=0 Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.641297 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e1190-a9f2-4010-98d3-a41898274b56","Type":"ContainerDied","Data":"8a784474fc4d2566a3c7f94d199432fc5b66b9857adfe0228ff878e391dcd2e2"} Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.646810 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" event={"ID":"e1eb671f-5f5e-4bc1-a560-1ad1daa99569","Type":"ContainerStarted","Data":"94ba89576258dd847a1871d7ad2b7af1562549a285cc52bda7703d6aad54d97e"} Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.650041 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-54b848945f-96sqb" event={"ID":"b1d7e1b1-7750-4182-9df7-ff097defa8b7","Type":"ContainerStarted","Data":"4233e687a6e64b2286fa9216165186c8a250f32162abe990557856b5ebf9de37"} Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.659662 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" event={"ID":"0fd992ef-8562-4a7a-ac8a-549c71393137","Type":"ContainerStarted","Data":"bcc949070a16e04a3786c503f27baaee29f56b637972ccc8b98ad4cbe83bcd3f"} Dec 12 16:13:12 crc kubenswrapper[4693]: E1212 16:13:12.676569 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.683900 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw8hp\" (UniqueName: \"kubernetes.io/projected/f9bc2e28-21b6-42e1-a680-92426ae37ecf-kube-api-access-gw8hp\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.693834 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9bc2e28-21b6-42e1-a680-92426ae37ecf-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "f9bc2e28-21b6-42e1-a680-92426ae37ecf" (UID: "f9bc2e28-21b6-42e1-a680-92426ae37ecf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.695363 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-566c474f44-2zhg2" event={"ID":"750f263e-745d-4b68-94ee-ab44877c8403","Type":"ContainerStarted","Data":"06f64021df8724dbd0e9d56a98d9045bfc0aebe13eee918ee760735098831916"} Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.695431 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-566c474f44-2zhg2" event={"ID":"750f263e-745d-4b68-94ee-ab44877c8403","Type":"ContainerStarted","Data":"a2bda1e5d0cb30d7486645d15e884753da552693674cc42106df79e1664e0699"} Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.706798 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-wrwhd" Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.706894 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-wrwhd" event={"ID":"f9bc2e28-21b6-42e1-a680-92426ae37ecf","Type":"ContainerDied","Data":"54118d49fe238afddfd78f4c7c96f85b9da701a5911374f1687612cd4b4e521a"} Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.706929 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54118d49fe238afddfd78f4c7c96f85b9da701a5911374f1687612cd4b4e521a" Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.748935 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9bc2e28-21b6-42e1-a680-92426ae37ecf-config-data" (OuterVolumeSpecName: "config-data") pod "f9bc2e28-21b6-42e1-a680-92426ae37ecf" (UID: "f9bc2e28-21b6-42e1-a680-92426ae37ecf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.790394 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9bc2e28-21b6-42e1-a680-92426ae37ecf-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:12 crc kubenswrapper[4693]: I1212 16:13:12.790453 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9bc2e28-21b6-42e1-a680-92426ae37ecf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:13 crc kubenswrapper[4693]: I1212 16:13:13.785696 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-566c474f44-2zhg2" event={"ID":"750f263e-745d-4b68-94ee-ab44877c8403","Type":"ContainerStarted","Data":"1a9a39cf6ffa14412f47924f1fe8cb0356b4023014bd2fd6c250b6e73fc26234"} Dec 12 16:13:13 crc kubenswrapper[4693]: I1212 16:13:13.788887 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:13 crc kubenswrapper[4693]: I1212 16:13:13.788931 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:13 crc kubenswrapper[4693]: I1212 16:13:13.812855 4693 generic.go:334] "Generic (PLEG): container finished" podID="42ae7c15-9f4d-4ef8-83d7-279226e74846" containerID="f9aa529b375e788b21d532f313443a601c90560e7b297c7e6c3a53ca6c35874c" exitCode=0 Dec 12 16:13:13 crc kubenswrapper[4693]: I1212 16:13:13.819032 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6fb94" event={"ID":"42ae7c15-9f4d-4ef8-83d7-279226e74846","Type":"ContainerDied","Data":"f9aa529b375e788b21d532f313443a601c90560e7b297c7e6c3a53ca6c35874c"} Dec 12 16:13:13 crc kubenswrapper[4693]: I1212 16:13:13.841984 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" exitCode=0 Dec 12 16:13:13 crc kubenswrapper[4693]: I1212 16:13:13.842555 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67"} Dec 12 16:13:13 crc kubenswrapper[4693]: I1212 16:13:13.842606 4693 scope.go:117] "RemoveContainer" containerID="fa5a22453d813e4ad162e4fc8b28463dbad032801eec3a25e1c47d7ec02c9b9a" Dec 12 16:13:13 crc kubenswrapper[4693]: I1212 16:13:13.843890 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:13:13 crc kubenswrapper[4693]: E1212 16:13:13.844735 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:13:13 crc kubenswrapper[4693]: I1212 16:13:13.844848 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-566c474f44-2zhg2" podStartSLOduration=3.844828902 podStartE2EDuration="3.844828902s" podCreationTimestamp="2025-12-12 16:13:10 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:13.832349497 +0000 UTC m=+1621.000989098" watchObservedRunningTime="2025-12-12 16:13:13.844828902 +0000 UTC m=+1621.013468503" Dec 12 16:13:13 crc kubenswrapper[4693]: I1212 16:13:13.900431 4693 generic.go:334] "Generic (PLEG): container finished" podID="fe4e1190-a9f2-4010-98d3-a41898274b56" containerID="97ec1dfe60655abfc2cb4a1695a1e9c983e0eec6b8ae1292c1c9dc6f39857180" exitCode=0 Dec 12 16:13:13 crc kubenswrapper[4693]: I1212 16:13:13.900538 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e1190-a9f2-4010-98d3-a41898274b56","Type":"ContainerDied","Data":"97ec1dfe60655abfc2cb4a1695a1e9c983e0eec6b8ae1292c1c9dc6f39857180"} Dec 12 16:13:13 crc kubenswrapper[4693]: I1212 16:13:13.963812 4693 generic.go:334] "Generic (PLEG): container finished" podID="e1eb671f-5f5e-4bc1-a560-1ad1daa99569" containerID="d548fe4a5402457d58dd1cec67538a3a83a7988082cd5576756b9ac09e5bac2b" exitCode=0 Dec 12 16:13:13 crc kubenswrapper[4693]: I1212 16:13:13.964387 4693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 16:13:13 crc kubenswrapper[4693]: I1212 16:13:13.964555 4693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 16:13:13 crc kubenswrapper[4693]: I1212 16:13:13.967513 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" event={"ID":"e1eb671f-5f5e-4bc1-a560-1ad1daa99569","Type":"ContainerDied","Data":"d548fe4a5402457d58dd1cec67538a3a83a7988082cd5576756b9ac09e5bac2b"} Dec 12 16:13:14 crc kubenswrapper[4693]: I1212 16:13:14.319592 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 12 16:13:14 crc kubenswrapper[4693]: I1212 16:13:14.319742 4693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 16:13:14 crc kubenswrapper[4693]: I1212 16:13:14.321586 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 12 16:13:14 crc kubenswrapper[4693]: I1212 16:13:14.953568 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.016788 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e1190-a9f2-4010-98d3-a41898274b56","Type":"ContainerDied","Data":"8473d01318b7d92b6e7b6cc05ca9f33540a7e1794d24fa153f961f0ab3020104"} Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.016941 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.098218 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-config-data\") pod \"fe4e1190-a9f2-4010-98d3-a41898274b56\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.098353 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-sg-core-conf-yaml\") pod \"fe4e1190-a9f2-4010-98d3-a41898274b56\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.098463 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-scripts\") pod \"fe4e1190-a9f2-4010-98d3-a41898274b56\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.098496 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e1190-a9f2-4010-98d3-a41898274b56-log-httpd\") pod \"fe4e1190-a9f2-4010-98d3-a41898274b56\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.098720 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e1190-a9f2-4010-98d3-a41898274b56-run-httpd\") pod \"fe4e1190-a9f2-4010-98d3-a41898274b56\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.098806 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn7tp\" (UniqueName: \"kubernetes.io/projected/fe4e1190-a9f2-4010-98d3-a41898274b56-kube-api-access-gn7tp\") pod \"fe4e1190-a9f2-4010-98d3-a41898274b56\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.098880 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-combined-ca-bundle\") pod \"fe4e1190-a9f2-4010-98d3-a41898274b56\" (UID: \"fe4e1190-a9f2-4010-98d3-a41898274b56\") " Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.099664 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe4e1190-a9f2-4010-98d3-a41898274b56-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fe4e1190-a9f2-4010-98d3-a41898274b56" (UID: "fe4e1190-a9f2-4010-98d3-a41898274b56"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.099702 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe4e1190-a9f2-4010-98d3-a41898274b56-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fe4e1190-a9f2-4010-98d3-a41898274b56" (UID: "fe4e1190-a9f2-4010-98d3-a41898274b56"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.118291 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-scripts" (OuterVolumeSpecName: "scripts") pod "fe4e1190-a9f2-4010-98d3-a41898274b56" (UID: "fe4e1190-a9f2-4010-98d3-a41898274b56"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.131134 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe4e1190-a9f2-4010-98d3-a41898274b56-kube-api-access-gn7tp" (OuterVolumeSpecName: "kube-api-access-gn7tp") pod "fe4e1190-a9f2-4010-98d3-a41898274b56" (UID: "fe4e1190-a9f2-4010-98d3-a41898274b56"). InnerVolumeSpecName "kube-api-access-gn7tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.214290 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.214321 4693 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e1190-a9f2-4010-98d3-a41898274b56-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.214333 4693 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e1190-a9f2-4010-98d3-a41898274b56-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.214348 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn7tp\" (UniqueName: \"kubernetes.io/projected/fe4e1190-a9f2-4010-98d3-a41898274b56-kube-api-access-gn7tp\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.221484 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fe4e1190-a9f2-4010-98d3-a41898274b56" (UID: "fe4e1190-a9f2-4010-98d3-a41898274b56"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.227763 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7485c9c4f8-fs9g8"] Dec 12 16:13:15 crc kubenswrapper[4693]: E1212 16:13:15.228265 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4e1190-a9f2-4010-98d3-a41898274b56" containerName="proxy-httpd" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.228434 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4e1190-a9f2-4010-98d3-a41898274b56" containerName="proxy-httpd" Dec 12 16:13:15 crc kubenswrapper[4693]: E1212 16:13:15.228455 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4e1190-a9f2-4010-98d3-a41898274b56" containerName="ceilometer-notification-agent" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.228464 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4e1190-a9f2-4010-98d3-a41898274b56" containerName="ceilometer-notification-agent" Dec 12 16:13:15 crc kubenswrapper[4693]: E1212 16:13:15.228479 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9bc2e28-21b6-42e1-a680-92426ae37ecf" containerName="heat-db-sync" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.228484 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9bc2e28-21b6-42e1-a680-92426ae37ecf" containerName="heat-db-sync" Dec 12 16:13:15 crc kubenswrapper[4693]: E1212 16:13:15.228516 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4e1190-a9f2-4010-98d3-a41898274b56" containerName="sg-core" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.228522 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4e1190-a9f2-4010-98d3-a41898274b56" containerName="sg-core" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.232599 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4e1190-a9f2-4010-98d3-a41898274b56" containerName="ceilometer-notification-agent" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.232628 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9bc2e28-21b6-42e1-a680-92426ae37ecf" containerName="heat-db-sync" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.232647 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4e1190-a9f2-4010-98d3-a41898274b56" containerName="sg-core" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.232667 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4e1190-a9f2-4010-98d3-a41898274b56" containerName="proxy-httpd" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.253873 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.260736 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.260941 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.282196 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-config-data" (OuterVolumeSpecName: "config-data") pod "fe4e1190-a9f2-4010-98d3-a41898274b56" (UID: "fe4e1190-a9f2-4010-98d3-a41898274b56"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.299252 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe4e1190-a9f2-4010-98d3-a41898274b56" (UID: "fe4e1190-a9f2-4010-98d3-a41898274b56"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.304331 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7485c9c4f8-fs9g8"] Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.322536 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.322599 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.322612 4693 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe4e1190-a9f2-4010-98d3-a41898274b56-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.425554 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92423eeb-1920-4f8f-afa0-c14f365edc54-logs\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.425714 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92423eeb-1920-4f8f-afa0-c14f365edc54-internal-tls-certs\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.425744 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92423eeb-1920-4f8f-afa0-c14f365edc54-combined-ca-bundle\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.425807 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92423eeb-1920-4f8f-afa0-c14f365edc54-config-data\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.425832 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92423eeb-1920-4f8f-afa0-c14f365edc54-public-tls-certs\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 
16:13:15.425869 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j787\" (UniqueName: \"kubernetes.io/projected/92423eeb-1920-4f8f-afa0-c14f365edc54-kube-api-access-8j787\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.425958 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92423eeb-1920-4f8f-afa0-c14f365edc54-config-data-custom\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.456222 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.497238 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.513798 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.516866 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.522101 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.522112 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.529097 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92423eeb-1920-4f8f-afa0-c14f365edc54-logs\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.529591 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92423eeb-1920-4f8f-afa0-c14f365edc54-internal-tls-certs\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.529927 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92423eeb-1920-4f8f-afa0-c14f365edc54-combined-ca-bundle\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.530086 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92423eeb-1920-4f8f-afa0-c14f365edc54-logs\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.530339 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92423eeb-1920-4f8f-afa0-c14f365edc54-config-data\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: 
\"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.530449 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92423eeb-1920-4f8f-afa0-c14f365edc54-public-tls-certs\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.530595 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j787\" (UniqueName: \"kubernetes.io/projected/92423eeb-1920-4f8f-afa0-c14f365edc54-kube-api-access-8j787\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.530770 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92423eeb-1920-4f8f-afa0-c14f365edc54-config-data-custom\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.537420 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92423eeb-1920-4f8f-afa0-c14f365edc54-combined-ca-bundle\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.546719 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.555802 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92423eeb-1920-4f8f-afa0-c14f365edc54-internal-tls-certs\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.559804 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92423eeb-1920-4f8f-afa0-c14f365edc54-public-tls-certs\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.561175 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92423eeb-1920-4f8f-afa0-c14f365edc54-config-data-custom\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.566195 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j787\" (UniqueName: \"kubernetes.io/projected/92423eeb-1920-4f8f-afa0-c14f365edc54-kube-api-access-8j787\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.567401 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/92423eeb-1920-4f8f-afa0-c14f365edc54-config-data\") pod \"barbican-api-7485c9c4f8-fs9g8\" (UID: \"92423eeb-1920-4f8f-afa0-c14f365edc54\") " pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.616259 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.638895 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-config-data\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.639352 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-log-httpd\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.639400 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.639438 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2s6f\" (UniqueName: \"kubernetes.io/projected/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-kube-api-access-b2s6f\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.639570 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-run-httpd\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.639942 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-scripts\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.640191 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.649548 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.649714 4693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.695543 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 12 16:13:15 crc kubenswrapper[4693]: 
I1212 16:13:15.742610 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-config-data\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.742688 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-log-httpd\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.742714 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.742733 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2s6f\" (UniqueName: \"kubernetes.io/projected/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-kube-api-access-b2s6f\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.742760 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-run-httpd\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.742810 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-scripts\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.742852 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.744048 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-log-httpd\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.744194 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-run-httpd\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.752949 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.753324 4693 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-config-data\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.757410 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.763477 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-scripts\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.769409 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2s6f\" (UniqueName: \"kubernetes.io/projected/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-kube-api-access-b2s6f\") pod \"ceilometer-0\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " pod="openstack/ceilometer-0" Dec 12 16:13:15 crc kubenswrapper[4693]: I1212 16:13:15.863657 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:13:17 crc kubenswrapper[4693]: I1212 16:13:16.999794 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:13:17 crc kubenswrapper[4693]: I1212 16:13:17.373693 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe4e1190-a9f2-4010-98d3-a41898274b56" path="/var/lib/kubelet/pods/fe4e1190-a9f2-4010-98d3-a41898274b56/volumes" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.016254 4693 scope.go:117] "RemoveContainer" containerID="8a784474fc4d2566a3c7f94d199432fc5b66b9857adfe0228ff878e391dcd2e2" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.061255 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6fb94" event={"ID":"42ae7c15-9f4d-4ef8-83d7-279226e74846","Type":"ContainerDied","Data":"4c6fce7ebc56547dd970db95a8621da834cd714adaa40b66d480684634fef58d"} Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.061645 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c6fce7ebc56547dd970db95a8621da834cd714adaa40b66d480684634fef58d" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.342061 4693 scope.go:117] "RemoveContainer" containerID="f36f503c49816caa9ecee74f12ef1f56d425aa50c6df41362486cc3bf5a9f169" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.349489 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-6fb94" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.472524 4693 scope.go:117] "RemoveContainer" containerID="97ec1dfe60655abfc2cb4a1695a1e9c983e0eec6b8ae1292c1c9dc6f39857180" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.567983 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-db-sync-config-data\") pod \"42ae7c15-9f4d-4ef8-83d7-279226e74846\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.568050 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv74m\" (UniqueName: \"kubernetes.io/projected/42ae7c15-9f4d-4ef8-83d7-279226e74846-kube-api-access-vv74m\") pod \"42ae7c15-9f4d-4ef8-83d7-279226e74846\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.568177 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42ae7c15-9f4d-4ef8-83d7-279226e74846-etc-machine-id\") pod \"42ae7c15-9f4d-4ef8-83d7-279226e74846\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.568328 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-config-data\") pod \"42ae7c15-9f4d-4ef8-83d7-279226e74846\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.568374 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-combined-ca-bundle\") pod \"42ae7c15-9f4d-4ef8-83d7-279226e74846\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.568429 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-scripts\") pod \"42ae7c15-9f4d-4ef8-83d7-279226e74846\" (UID: \"42ae7c15-9f4d-4ef8-83d7-279226e74846\") " Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.568824 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42ae7c15-9f4d-4ef8-83d7-279226e74846-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "42ae7c15-9f4d-4ef8-83d7-279226e74846" (UID: "42ae7c15-9f4d-4ef8-83d7-279226e74846"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.569002 4693 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42ae7c15-9f4d-4ef8-83d7-279226e74846-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.575886 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "42ae7c15-9f4d-4ef8-83d7-279226e74846" (UID: "42ae7c15-9f4d-4ef8-83d7-279226e74846"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.576483 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-scripts" (OuterVolumeSpecName: "scripts") pod "42ae7c15-9f4d-4ef8-83d7-279226e74846" (UID: "42ae7c15-9f4d-4ef8-83d7-279226e74846"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.578445 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ae7c15-9f4d-4ef8-83d7-279226e74846-kube-api-access-vv74m" (OuterVolumeSpecName: "kube-api-access-vv74m") pod "42ae7c15-9f4d-4ef8-83d7-279226e74846" (UID: "42ae7c15-9f4d-4ef8-83d7-279226e74846"). InnerVolumeSpecName "kube-api-access-vv74m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.624316 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:13:18 crc kubenswrapper[4693]: W1212 16:13:18.638464 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9b67354_eb2d_4ade_bc4a_096d9e0b9791.slice/crio-2e8b9ada4fc38cb9f1f4d40b6dc8248d81ab80efd419cde1f72e0780f3e0650a WatchSource:0}: Error finding container 2e8b9ada4fc38cb9f1f4d40b6dc8248d81ab80efd419cde1f72e0780f3e0650a: Status 404 returned error can't find the container with id 2e8b9ada4fc38cb9f1f4d40b6dc8248d81ab80efd419cde1f72e0780f3e0650a Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.638669 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42ae7c15-9f4d-4ef8-83d7-279226e74846" (UID: "42ae7c15-9f4d-4ef8-83d7-279226e74846"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.653252 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-config-data" (OuterVolumeSpecName: "config-data") pod "42ae7c15-9f4d-4ef8-83d7-279226e74846" (UID: "42ae7c15-9f4d-4ef8-83d7-279226e74846"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.673095 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.673126 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.673140 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.673150 4693 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42ae7c15-9f4d-4ef8-83d7-279226e74846-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.673159 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv74m\" (UniqueName: \"kubernetes.io/projected/42ae7c15-9f4d-4ef8-83d7-279226e74846-kube-api-access-vv74m\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:18 crc kubenswrapper[4693]: W1212 16:13:18.768897 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92423eeb_1920_4f8f_afa0_c14f365edc54.slice/crio-ea9e5938a98b09388cc797dbd85785b08b9eee425b0573db2b00483128c4b6a0 WatchSource:0}: Error finding container ea9e5938a98b09388cc797dbd85785b08b9eee425b0573db2b00483128c4b6a0: Status 404 returned error can't find the container with id ea9e5938a98b09388cc797dbd85785b08b9eee425b0573db2b00483128c4b6a0 Dec 12 16:13:18 crc kubenswrapper[4693]: I1212 16:13:18.774066 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7485c9c4f8-fs9g8"] Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.020607 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5788c966cf-j7w8t" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.161555 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7485c9c4f8-fs9g8" event={"ID":"92423eeb-1920-4f8f-afa0-c14f365edc54","Type":"ContainerStarted","Data":"baa83b41940db13a8a6f26aa3e5583f38caa494932bc6c7372e97af102e10083"} Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.161876 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7485c9c4f8-fs9g8" event={"ID":"92423eeb-1920-4f8f-afa0-c14f365edc54","Type":"ContainerStarted","Data":"ea9e5938a98b09388cc797dbd85785b08b9eee425b0573db2b00483128c4b6a0"} Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.213571 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" event={"ID":"e1eb671f-5f5e-4bc1-a560-1ad1daa99569","Type":"ContainerStarted","Data":"37807a2023a82b02918a6962e585a6b1a51a8ed2a2c71d208e67f4ee984001c4"} Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.214025 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.215485 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-68cb54fffb-t6qwm"] Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.215781 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68cb54fffb-t6qwm" podUID="d5518acc-0be1-4b59-874d-61eeb018a534" containerName="neutron-api" containerID="cri-o://f0684416f63a3faafc53939611786d5ca688417f4c669ac6ab70fbc3c3d18802" gracePeriod=30 Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.215946 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68cb54fffb-t6qwm" podUID="d5518acc-0be1-4b59-874d-61eeb018a534" containerName="neutron-httpd" containerID="cri-o://caa1e11e8c9cc286ab857822f3d6e99bf2856df54e087cb0fc6b52ceede8c666" gracePeriod=30 Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.261629 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-54b848945f-96sqb" event={"ID":"b1d7e1b1-7750-4182-9df7-ff097defa8b7","Type":"ContainerStarted","Data":"8c2faa47b3e56bd29dbb70f1d64661c50d89c95dee397622992e6414f042fea6"} Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.261681 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-54b848945f-96sqb" event={"ID":"b1d7e1b1-7750-4182-9df7-ff097defa8b7","Type":"ContainerStarted","Data":"38c2d7cd97b4ea4044cd348cf1ab06b290d1ede827d2cbed0f2b1afbf9791ee4"} Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.284359 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" event={"ID":"0fd992ef-8562-4a7a-ac8a-549c71393137","Type":"ContainerStarted","Data":"8beb8898b20b8ced1799c698b95d6cb9379c58cccbfffb1e6279acf0ba6d971e"} Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.285007 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" event={"ID":"0fd992ef-8562-4a7a-ac8a-549c71393137","Type":"ContainerStarted","Data":"ed0760ff6f4a36cd2f879c00892fd26081833ca948b809f9a1520f2ad3b6bb4e"} Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.301801 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" podStartSLOduration=9.301708751 podStartE2EDuration="9.301708751s" podCreationTimestamp="2025-12-12 16:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:19.247217846 +0000 UTC m=+1626.415857447" watchObservedRunningTime="2025-12-12 16:13:19.301708751 +0000 UTC m=+1626.470348362" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.314260 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-6fb94" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.314339 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b67354-eb2d-4ade-bc4a-096d9e0b9791","Type":"ContainerStarted","Data":"2e8b9ada4fc38cb9f1f4d40b6dc8248d81ab80efd419cde1f72e0780f3e0650a"} Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.328377 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-54b848945f-96sqb" podStartSLOduration=3.58637015 podStartE2EDuration="9.328346798s" podCreationTimestamp="2025-12-12 16:13:10 +0000 UTC" firstStartedPulling="2025-12-12 16:13:12.32018391 +0000 UTC m=+1619.488823501" lastFinishedPulling="2025-12-12 16:13:18.062160548 +0000 UTC m=+1625.230800149" observedRunningTime="2025-12-12 16:13:19.296714987 +0000 UTC m=+1626.465354598" watchObservedRunningTime="2025-12-12 16:13:19.328346798 +0000 UTC m=+1626.496986399" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.376193 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7b4845df5d-lqsj6" podStartSLOduration=3.147843904 podStartE2EDuration="9.376156674s" podCreationTimestamp="2025-12-12 16:13:10 +0000 UTC" firstStartedPulling="2025-12-12 16:13:11.833824417 +0000 UTC m=+1619.002464018" lastFinishedPulling="2025-12-12 16:13:18.062137187 +0000 UTC m=+1625.230776788" observedRunningTime="2025-12-12 16:13:19.334205086 +0000 UTC m=+1626.502844697" watchObservedRunningTime="2025-12-12 16:13:19.376156674 +0000 UTC m=+1626.544796275" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.802744 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 16:13:19 crc kubenswrapper[4693]: E1212 16:13:19.803581 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ae7c15-9f4d-4ef8-83d7-279226e74846" containerName="cinder-db-sync" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.803605 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ae7c15-9f4d-4ef8-83d7-279226e74846" containerName="cinder-db-sync" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.803846 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ae7c15-9f4d-4ef8-83d7-279226e74846" containerName="cinder-db-sync" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.805025 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.811738 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.811752 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-lh46s" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.811892 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.812014 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.830389 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.876660 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-config-data\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.876727 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.876766 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-scripts\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.876818 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mthct\" (UniqueName: \"kubernetes.io/projected/ea456678-a355-439f-8e5b-28c85dac7da6-kube-api-access-mthct\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.876854 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea456678-a355-439f-8e5b-28c85dac7da6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.876898 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.923946 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-v8pg5"] Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.981869 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-config-data\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.981937 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.981984 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-scripts\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.982043 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mthct\" (UniqueName: \"kubernetes.io/projected/ea456678-a355-439f-8e5b-28c85dac7da6-kube-api-access-mthct\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.982092 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea456678-a355-439f-8e5b-28c85dac7da6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.982145 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:19 crc kubenswrapper[4693]: I1212 16:13:19.992566 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea456678-a355-439f-8e5b-28c85dac7da6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.002181 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.008814 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-scripts\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.028962 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mthct\" (UniqueName: \"kubernetes.io/projected/ea456678-a355-439f-8e5b-28c85dac7da6-kube-api-access-mthct\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.034311 4693 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-config-data\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.046047 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nw2pb"] Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.060172 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.069468 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.134757 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nw2pb"] Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.136044 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.202383 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.202437 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.202475 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-config\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.202515 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.202612 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vddx\" (UniqueName: \"kubernetes.io/projected/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-kube-api-access-8vddx\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.202675 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.237897 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.240340 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.244658 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.271313 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.326321 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1033146-d0fa-4659-9043-48fd5b1a5dee-logs\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.326431 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-config-data-custom\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.328358 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.328406 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.328427 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.328462 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-config\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.328486 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-scripts\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 
16:13:20.328517 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-config-data\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.328549 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.328660 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1033146-d0fa-4659-9043-48fd5b1a5dee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.328684 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vddx\" (UniqueName: \"kubernetes.io/projected/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-kube-api-access-8vddx\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.328730 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mqbc\" (UniqueName: \"kubernetes.io/projected/a1033146-d0fa-4659-9043-48fd5b1a5dee-kube-api-access-6mqbc\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.328770 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.337140 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-config\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.338988 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.339649 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.341779 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.344964 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.375004 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vddx\" (UniqueName: \"kubernetes.io/projected/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-kube-api-access-8vddx\") pod \"dnsmasq-dns-5c9776ccc5-nw2pb\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.382435 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.406467 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7485c9c4f8-fs9g8" event={"ID":"92423eeb-1920-4f8f-afa0-c14f365edc54","Type":"ContainerStarted","Data":"67f0ae656b1aab654ea066c8eae63dcb7a3c46f9feba8360ac2e9dcd864111de"} Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.406530 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.406556 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.430617 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1033146-d0fa-4659-9043-48fd5b1a5dee-logs\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.430710 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-config-data-custom\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.430764 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.430801 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-scripts\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.430832 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-config-data\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") 
" pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.430940 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1033146-d0fa-4659-9043-48fd5b1a5dee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.430992 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mqbc\" (UniqueName: \"kubernetes.io/projected/a1033146-d0fa-4659-9043-48fd5b1a5dee-kube-api-access-6mqbc\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.431898 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7485c9c4f8-fs9g8" podStartSLOduration=5.431883033 podStartE2EDuration="5.431883033s" podCreationTimestamp="2025-12-12 16:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:20.42877218 +0000 UTC m=+1627.597411781" watchObservedRunningTime="2025-12-12 16:13:20.431883033 +0000 UTC m=+1627.600522634" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.436982 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1033146-d0fa-4659-9043-48fd5b1a5dee-logs\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.437543 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1033146-d0fa-4659-9043-48fd5b1a5dee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.440535 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-config-data\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.448941 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.451004 4693 generic.go:334] "Generic (PLEG): container finished" podID="d5518acc-0be1-4b59-874d-61eeb018a534" containerID="caa1e11e8c9cc286ab857822f3d6e99bf2856df54e087cb0fc6b52ceede8c666" exitCode=0 Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.451362 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cb54fffb-t6qwm" event={"ID":"d5518acc-0be1-4b59-874d-61eeb018a534","Type":"ContainerDied","Data":"caa1e11e8c9cc286ab857822f3d6e99bf2856df54e087cb0fc6b52ceede8c666"} Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.468456 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-scripts\") pod \"cinder-api-0\" (UID: 
\"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.468617 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-config-data-custom\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.501613 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mqbc\" (UniqueName: \"kubernetes.io/projected/a1033146-d0fa-4659-9043-48fd5b1a5dee-kube-api-access-6mqbc\") pod \"cinder-api-0\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.761564 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 12 16:13:20 crc kubenswrapper[4693]: I1212 16:13:20.908402 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 16:13:21 crc kubenswrapper[4693]: I1212 16:13:21.131070 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-566c474f44-2zhg2" podUID="750f263e-745d-4b68-94ee-ab44877c8403" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 16:13:21 crc kubenswrapper[4693]: I1212 16:13:21.301932 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nw2pb"] Dec 12 16:13:21 crc kubenswrapper[4693]: I1212 16:13:21.542294 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" event={"ID":"e73400cc-c2d1-4e09-94b2-38ad2d5a3058","Type":"ContainerStarted","Data":"c593edfb17c6c8505024658c98f1ebf93850ca903ea476a2b8085bb6cb19f5e3"} Dec 12 16:13:21 crc kubenswrapper[4693]: I1212 16:13:21.544232 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 12 16:13:21 crc kubenswrapper[4693]: I1212 16:13:21.546938 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b67354-eb2d-4ade-bc4a-096d9e0b9791","Type":"ContainerStarted","Data":"3167185dcb78e8afab1e2cc577722bddfcb001ac1bae3df9790b36700676e095"} Dec 12 16:13:21 crc kubenswrapper[4693]: I1212 16:13:21.547878 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" podUID="e1eb671f-5f5e-4bc1-a560-1ad1daa99569" containerName="dnsmasq-dns" containerID="cri-o://37807a2023a82b02918a6962e585a6b1a51a8ed2a2c71d208e67f4ee984001c4" gracePeriod=10 Dec 12 16:13:21 crc kubenswrapper[4693]: I1212 16:13:21.550421 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea456678-a355-439f-8e5b-28c85dac7da6","Type":"ContainerStarted","Data":"a6b6bee533cd26a47bb16a3f1fc607ecae040d70815ea57f805145eec67f6595"} Dec 12 16:13:21 crc kubenswrapper[4693]: W1212 16:13:21.624497 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1033146_d0fa_4659_9043_48fd5b1a5dee.slice/crio-2cc5595a60cfdd104d040b3f3c11022007b9f167fe4479594402d3fdaf99b247 WatchSource:0}: Error finding container 2cc5595a60cfdd104d040b3f3c11022007b9f167fe4479594402d3fdaf99b247: Status 404 returned error can't find the container with id 
Dec 12 16:13:21 crc kubenswrapper[4693]: W1212 16:13:21.624497 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1033146_d0fa_4659_9043_48fd5b1a5dee.slice/crio-2cc5595a60cfdd104d040b3f3c11022007b9f167fe4479594402d3fdaf99b247 WatchSource:0}: Error finding container 2cc5595a60cfdd104d040b3f3c11022007b9f167fe4479594402d3fdaf99b247: Status 404 returned error can't find the container with id 2cc5595a60cfdd104d040b3f3c11022007b9f167fe4479594402d3fdaf99b247
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.559141 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-v8pg5"
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.577304 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a1033146-d0fa-4659-9043-48fd5b1a5dee","Type":"ContainerStarted","Data":"2cc5595a60cfdd104d040b3f3c11022007b9f167fe4479594402d3fdaf99b247"}
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.578959 4693 generic.go:334] "Generic (PLEG): container finished" podID="e1eb671f-5f5e-4bc1-a560-1ad1daa99569" containerID="37807a2023a82b02918a6962e585a6b1a51a8ed2a2c71d208e67f4ee984001c4" exitCode=0
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.579021 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" event={"ID":"e1eb671f-5f5e-4bc1-a560-1ad1daa99569","Type":"ContainerDied","Data":"37807a2023a82b02918a6962e585a6b1a51a8ed2a2c71d208e67f4ee984001c4"}
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.579050 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-v8pg5" event={"ID":"e1eb671f-5f5e-4bc1-a560-1ad1daa99569","Type":"ContainerDied","Data":"94ba89576258dd847a1871d7ad2b7af1562549a285cc52bda7703d6aad54d97e"}
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.579069 4693 scope.go:117] "RemoveContainer" containerID="37807a2023a82b02918a6962e585a6b1a51a8ed2a2c71d208e67f4ee984001c4"
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.579132 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-v8pg5"
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.581698 4693 generic.go:334] "Generic (PLEG): container finished" podID="e73400cc-c2d1-4e09-94b2-38ad2d5a3058" containerID="fd96b42dabb03d7aaed49b229b246e86583eca48821e58de5daf3b8361a64c36" exitCode=0
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.581772 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" event={"ID":"e73400cc-c2d1-4e09-94b2-38ad2d5a3058","Type":"ContainerDied","Data":"fd96b42dabb03d7aaed49b229b246e86583eca48821e58de5daf3b8361a64c36"}
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.671153 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-dns-svc\") pod \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") "
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.671451 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-dns-swift-storage-0\") pod \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") "
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.671508 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-config\") pod \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") "
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.671781 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgzsb\" (UniqueName: \"kubernetes.io/projected/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-kube-api-access-vgzsb\") pod \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") "
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.671801 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-ovsdbserver-nb\") pod \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") "
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.671818 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-ovsdbserver-sb\") pod \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\" (UID: \"e1eb671f-5f5e-4bc1-a560-1ad1daa99569\") "
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.683284 4693 scope.go:117] "RemoveContainer" containerID="d548fe4a5402457d58dd1cec67538a3a83a7988082cd5576756b9ac09e5bac2b"
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.719674 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-kube-api-access-vgzsb" (OuterVolumeSpecName: "kube-api-access-vgzsb") pod "e1eb671f-5f5e-4bc1-a560-1ad1daa99569" (UID: "e1eb671f-5f5e-4bc1-a560-1ad1daa99569"). InnerVolumeSpecName "kube-api-access-vgzsb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.738022 4693 scope.go:117] "RemoveContainer" containerID="37807a2023a82b02918a6962e585a6b1a51a8ed2a2c71d208e67f4ee984001c4"
Dec 12 16:13:22 crc kubenswrapper[4693]: E1212 16:13:22.742464 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37807a2023a82b02918a6962e585a6b1a51a8ed2a2c71d208e67f4ee984001c4\": container with ID starting with 37807a2023a82b02918a6962e585a6b1a51a8ed2a2c71d208e67f4ee984001c4 not found: ID does not exist" containerID="37807a2023a82b02918a6962e585a6b1a51a8ed2a2c71d208e67f4ee984001c4"
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.742787 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37807a2023a82b02918a6962e585a6b1a51a8ed2a2c71d208e67f4ee984001c4"} err="failed to get container status \"37807a2023a82b02918a6962e585a6b1a51a8ed2a2c71d208e67f4ee984001c4\": rpc error: code = NotFound desc = could not find container \"37807a2023a82b02918a6962e585a6b1a51a8ed2a2c71d208e67f4ee984001c4\": container with ID starting with 37807a2023a82b02918a6962e585a6b1a51a8ed2a2c71d208e67f4ee984001c4 not found: ID does not exist"
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.742883 4693 scope.go:117] "RemoveContainer" containerID="d548fe4a5402457d58dd1cec67538a3a83a7988082cd5576756b9ac09e5bac2b"
Dec 12 16:13:22 crc kubenswrapper[4693]: E1212 16:13:22.746394 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d548fe4a5402457d58dd1cec67538a3a83a7988082cd5576756b9ac09e5bac2b\": container with ID starting with d548fe4a5402457d58dd1cec67538a3a83a7988082cd5576756b9ac09e5bac2b not found: ID does not exist" containerID="d548fe4a5402457d58dd1cec67538a3a83a7988082cd5576756b9ac09e5bac2b"
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.746442 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d548fe4a5402457d58dd1cec67538a3a83a7988082cd5576756b9ac09e5bac2b"} err="failed to get container status \"d548fe4a5402457d58dd1cec67538a3a83a7988082cd5576756b9ac09e5bac2b\": rpc error: code = NotFound desc = could not find container \"d548fe4a5402457d58dd1cec67538a3a83a7988082cd5576756b9ac09e5bac2b\": container with ID starting with d548fe4a5402457d58dd1cec67538a3a83a7988082cd5576756b9ac09e5bac2b not found: ID does not exist"
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.800981 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgzsb\" (UniqueName: \"kubernetes.io/projected/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-kube-api-access-vgzsb\") on node \"crc\" DevicePath \"\""
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.855996 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e1eb671f-5f5e-4bc1-a560-1ad1daa99569" (UID: "e1eb671f-5f5e-4bc1-a560-1ad1daa99569"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.867193 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e1eb671f-5f5e-4bc1-a560-1ad1daa99569" (UID: "e1eb671f-5f5e-4bc1-a560-1ad1daa99569"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.903677 4693 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.903733 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 12 16:13:22 crc kubenswrapper[4693]: I1212 16:13:22.911693 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e1eb671f-5f5e-4bc1-a560-1ad1daa99569" (UID: "e1eb671f-5f5e-4bc1-a560-1ad1daa99569"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 16:13:23 crc kubenswrapper[4693]: I1212 16:13:23.009852 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 12 16:13:23 crc kubenswrapper[4693]: I1212 16:13:23.055766 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-config" (OuterVolumeSpecName: "config") pod "e1eb671f-5f5e-4bc1-a560-1ad1daa99569" (UID: "e1eb671f-5f5e-4bc1-a560-1ad1daa99569"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 16:13:23 crc kubenswrapper[4693]: I1212 16:13:23.084870 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1eb671f-5f5e-4bc1-a560-1ad1daa99569" (UID: "e1eb671f-5f5e-4bc1-a560-1ad1daa99569"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 16:13:23 crc kubenswrapper[4693]: I1212 16:13:23.112417 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-config\") on node \"crc\" DevicePath \"\""
Dec 12 16:13:23 crc kubenswrapper[4693]: I1212 16:13:23.112780 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1eb671f-5f5e-4bc1-a560-1ad1daa99569-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 12 16:13:23 crc kubenswrapper[4693]: I1212 16:13:23.273650 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-v8pg5"]
Dec 12 16:13:23 crc kubenswrapper[4693]: I1212 16:13:23.294856 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-v8pg5"]
Dec 12 16:13:23 crc kubenswrapper[4693]: I1212 16:13:23.441179 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1eb671f-5f5e-4bc1-a560-1ad1daa99569" path="/var/lib/kubelet/pods/e1eb671f-5f5e-4bc1-a560-1ad1daa99569/volumes"
Dec 12 16:13:23 crc kubenswrapper[4693]: I1212 16:13:23.611957 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b67354-eb2d-4ade-bc4a-096d9e0b9791","Type":"ContainerStarted","Data":"d60dbc8e9e816078cd2978fc224ebc7f601dd63e3098705808f6e4c56d6fadd2"}
Dec 12 16:13:23 crc kubenswrapper[4693]: I1212 16:13:23.616297 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a1033146-d0fa-4659-9043-48fd5b1a5dee","Type":"ContainerStarted","Data":"cdf7d44856e5a7226c6b7e3559e4a275739d1d680aed307d6d3e23192d1a0e64"}
Dec 12 16:13:23 crc kubenswrapper[4693]: I1212 16:13:23.667473 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" event={"ID":"e73400cc-c2d1-4e09-94b2-38ad2d5a3058","Type":"ContainerStarted","Data":"dee514335d58c5942ca31bc163d90679b98e6a7c1885fcd4ea02437a78426476"}
Dec 12 16:13:23 crc kubenswrapper[4693]: I1212 16:13:23.668837 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb"
Dec 12 16:13:23 crc kubenswrapper[4693]: I1212 16:13:23.731610 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" podStartSLOduration=4.731589194 podStartE2EDuration="4.731589194s" podCreationTimestamp="2025-12-12 16:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:23.726783465 +0000 UTC m=+1630.895423066" watchObservedRunningTime="2025-12-12 16:13:23.731589194 +0000 UTC m=+1630.900228795"
Dec 12 16:13:23 crc kubenswrapper[4693]: I1212 16:13:23.851633 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-566c474f44-2zhg2"
Dec 12 16:13:24 crc kubenswrapper[4693]: I1212 16:13:24.554059 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 12 16:13:24 crc kubenswrapper[4693]: I1212 16:13:24.689565 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a1033146-d0fa-4659-9043-48fd5b1a5dee","Type":"ContainerStarted","Data":"c1fa3ca89031d35ba0ed3ce2080be294109768af6c117ffe4694420f860b058b"}
Dec 12 16:13:24 crc kubenswrapper[4693]: I1212 16:13:24.690867 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Dec 12 16:13:24 crc kubenswrapper[4693]: I1212 16:13:24.714881 4693 generic.go:334] "Generic (PLEG): container finished" podID="d5518acc-0be1-4b59-874d-61eeb018a534" containerID="f0684416f63a3faafc53939611786d5ca688417f4c669ac6ab70fbc3c3d18802" exitCode=0
Dec 12 16:13:24 crc kubenswrapper[4693]: I1212 16:13:24.714987 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cb54fffb-t6qwm" event={"ID":"d5518acc-0be1-4b59-874d-61eeb018a534","Type":"ContainerDied","Data":"f0684416f63a3faafc53939611786d5ca688417f4c669ac6ab70fbc3c3d18802"}
Dec 12 16:13:24 crc kubenswrapper[4693]: I1212 16:13:24.727059 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b67354-eb2d-4ade-bc4a-096d9e0b9791","Type":"ContainerStarted","Data":"4c78635e88ffc50cb76174a91f620a324f579e0c56181a8a4d6d9312a4b81d4b"}
Dec 12 16:13:24 crc kubenswrapper[4693]: I1212 16:13:24.733450 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.733421033 podStartE2EDuration="5.733421033s" podCreationTimestamp="2025-12-12 16:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:24.711816301 +0000 UTC m=+1631.880455902" watchObservedRunningTime="2025-12-12 16:13:24.733421033 +0000 UTC m=+1631.902060634"
Dec 12 16:13:24 crc kubenswrapper[4693]: I1212 16:13:24.739471 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea456678-a355-439f-8e5b-28c85dac7da6","Type":"ContainerStarted","Data":"1f980a6adcbc3f484e6ba1ab38fb873135d48813907334d25887a58441da8539"}
Dec 12 16:13:24 crc kubenswrapper[4693]: I1212 16:13:24.759876 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68cb54fffb-t6qwm"
Dec 12 16:13:24 crc kubenswrapper[4693]: I1212 16:13:24.930726 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwxrb\" (UniqueName: \"kubernetes.io/projected/d5518acc-0be1-4b59-874d-61eeb018a534-kube-api-access-qwxrb\") pod \"d5518acc-0be1-4b59-874d-61eeb018a534\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") "
Dec 12 16:13:24 crc kubenswrapper[4693]: I1212 16:13:24.931019 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-combined-ca-bundle\") pod \"d5518acc-0be1-4b59-874d-61eeb018a534\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") "
Dec 12 16:13:24 crc kubenswrapper[4693]: I1212 16:13:24.931055 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-httpd-config\") pod \"d5518acc-0be1-4b59-874d-61eeb018a534\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") "
Dec 12 16:13:24 crc kubenswrapper[4693]: I1212 16:13:24.931124 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-ovndb-tls-certs\") pod \"d5518acc-0be1-4b59-874d-61eeb018a534\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") "
Dec 12 16:13:24 crc kubenswrapper[4693]: I1212 16:13:24.931376 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-config\") pod \"d5518acc-0be1-4b59-874d-61eeb018a534\" (UID: \"d5518acc-0be1-4b59-874d-61eeb018a534\") "
Dec 12 16:13:24 crc kubenswrapper[4693]: I1212 16:13:24.938540 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d5518acc-0be1-4b59-874d-61eeb018a534" (UID: "d5518acc-0be1-4b59-874d-61eeb018a534"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:13:24 crc kubenswrapper[4693]: I1212 16:13:24.939598 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5518acc-0be1-4b59-874d-61eeb018a534-kube-api-access-qwxrb" (OuterVolumeSpecName: "kube-api-access-qwxrb") pod "d5518acc-0be1-4b59-874d-61eeb018a534" (UID: "d5518acc-0be1-4b59-874d-61eeb018a534"). InnerVolumeSpecName "kube-api-access-qwxrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.023477 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5518acc-0be1-4b59-874d-61eeb018a534" (UID: "d5518acc-0be1-4b59-874d-61eeb018a534"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.038113 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwxrb\" (UniqueName: \"kubernetes.io/projected/d5518acc-0be1-4b59-874d-61eeb018a534-kube-api-access-qwxrb\") on node \"crc\" DevicePath \"\""
Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.038175 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.038188 4693 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-httpd-config\") on node \"crc\" DevicePath \"\""
Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.106402 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-config" (OuterVolumeSpecName: "config") pod "d5518acc-0be1-4b59-874d-61eeb018a534" (UID: "d5518acc-0be1-4b59-874d-61eeb018a534"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.125494 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d5518acc-0be1-4b59-874d-61eeb018a534" (UID: "d5518acc-0be1-4b59-874d-61eeb018a534"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.142931 4693 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.142964 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d5518acc-0be1-4b59-874d-61eeb018a534-config\") on node \"crc\" DevicePath \"\""
Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.673918 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-566c474f44-2zhg2"
Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.764118 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea456678-a355-439f-8e5b-28c85dac7da6","Type":"ContainerStarted","Data":"139b33de7ec3df8e687a11f628ceb839bb61a87d920881c407cc4cdb6c3ef16a"}
Need to start a new one" pod="openstack/neutron-68cb54fffb-t6qwm" Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.782143 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cb54fffb-t6qwm" event={"ID":"d5518acc-0be1-4b59-874d-61eeb018a534","Type":"ContainerDied","Data":"a92cf39e7ee3c6946df5263b8fd9a5ef46303dc5dd58e9d95782239103f70ab9"} Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.782183 4693 scope.go:117] "RemoveContainer" containerID="caa1e11e8c9cc286ab857822f3d6e99bf2856df54e087cb0fc6b52ceede8c666" Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.810966 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.421838721 podStartE2EDuration="6.810933468s" podCreationTimestamp="2025-12-12 16:13:19 +0000 UTC" firstStartedPulling="2025-12-12 16:13:20.981025364 +0000 UTC m=+1628.149664965" lastFinishedPulling="2025-12-12 16:13:22.370120111 +0000 UTC m=+1629.538759712" observedRunningTime="2025-12-12 16:13:25.794110745 +0000 UTC m=+1632.962750356" watchObservedRunningTime="2025-12-12 16:13:25.810933468 +0000 UTC m=+1632.979573069" Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.822020 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b67354-eb2d-4ade-bc4a-096d9e0b9791","Type":"ContainerStarted","Data":"4199dbc803e76efad2b01d09b01976cd73c96ef2282fd5d932840a4833b52cff"} Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.822347 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.823494 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a1033146-d0fa-4659-9043-48fd5b1a5dee" containerName="cinder-api-log" containerID="cri-o://cdf7d44856e5a7226c6b7e3559e4a275739d1d680aed307d6d3e23192d1a0e64" gracePeriod=30 Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.823771 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a1033146-d0fa-4659-9043-48fd5b1a5dee" containerName="cinder-api" containerID="cri-o://c1fa3ca89031d35ba0ed3ce2080be294109768af6c117ffe4694420f860b058b" gracePeriod=30 Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.849117 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68cb54fffb-t6qwm"] Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.876486 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-68cb54fffb-t6qwm"] Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.890791 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.441556543 podStartE2EDuration="10.890761055s" podCreationTimestamp="2025-12-12 16:13:15 +0000 UTC" firstStartedPulling="2025-12-12 16:13:18.641291426 +0000 UTC m=+1625.809931027" lastFinishedPulling="2025-12-12 16:13:25.090495938 +0000 UTC m=+1632.259135539" observedRunningTime="2025-12-12 16:13:25.869038581 +0000 UTC m=+1633.037678182" watchObservedRunningTime="2025-12-12 16:13:25.890761055 +0000 UTC m=+1633.059400656" Dec 12 16:13:25 crc kubenswrapper[4693]: I1212 16:13:25.891477 4693 scope.go:117] "RemoveContainer" containerID="f0684416f63a3faafc53939611786d5ca688417f4c669ac6ab70fbc3c3d18802" Dec 12 16:13:26 crc kubenswrapper[4693]: I1212 16:13:26.358159 4693 scope.go:117] "RemoveContainer" 
containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:13:26 crc kubenswrapper[4693]: E1212 16:13:26.364814 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:13:26 crc kubenswrapper[4693]: I1212 16:13:26.839720 4693 generic.go:334] "Generic (PLEG): container finished" podID="a1033146-d0fa-4659-9043-48fd5b1a5dee" containerID="c1fa3ca89031d35ba0ed3ce2080be294109768af6c117ffe4694420f860b058b" exitCode=0 Dec 12 16:13:26 crc kubenswrapper[4693]: I1212 16:13:26.839749 4693 generic.go:334] "Generic (PLEG): container finished" podID="a1033146-d0fa-4659-9043-48fd5b1a5dee" containerID="cdf7d44856e5a7226c6b7e3559e4a275739d1d680aed307d6d3e23192d1a0e64" exitCode=143 Dec 12 16:13:26 crc kubenswrapper[4693]: I1212 16:13:26.839783 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a1033146-d0fa-4659-9043-48fd5b1a5dee","Type":"ContainerDied","Data":"c1fa3ca89031d35ba0ed3ce2080be294109768af6c117ffe4694420f860b058b"} Dec 12 16:13:26 crc kubenswrapper[4693]: I1212 16:13:26.839811 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a1033146-d0fa-4659-9043-48fd5b1a5dee","Type":"ContainerDied","Data":"cdf7d44856e5a7226c6b7e3559e4a275739d1d680aed307d6d3e23192d1a0e64"} Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.214872 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.325626 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1033146-d0fa-4659-9043-48fd5b1a5dee-etc-machine-id\") pod \"a1033146-d0fa-4659-9043-48fd5b1a5dee\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.325738 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-config-data-custom\") pod \"a1033146-d0fa-4659-9043-48fd5b1a5dee\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.325989 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-config-data\") pod \"a1033146-d0fa-4659-9043-48fd5b1a5dee\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.326051 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mqbc\" (UniqueName: \"kubernetes.io/projected/a1033146-d0fa-4659-9043-48fd5b1a5dee-kube-api-access-6mqbc\") pod \"a1033146-d0fa-4659-9043-48fd5b1a5dee\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.326220 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-combined-ca-bundle\") pod \"a1033146-d0fa-4659-9043-48fd5b1a5dee\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.326387 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-scripts\") pod \"a1033146-d0fa-4659-9043-48fd5b1a5dee\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.326481 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1033146-d0fa-4659-9043-48fd5b1a5dee-logs\") pod \"a1033146-d0fa-4659-9043-48fd5b1a5dee\" (UID: \"a1033146-d0fa-4659-9043-48fd5b1a5dee\") " Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.328663 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1033146-d0fa-4659-9043-48fd5b1a5dee-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a1033146-d0fa-4659-9043-48fd5b1a5dee" (UID: "a1033146-d0fa-4659-9043-48fd5b1a5dee"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.330519 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1033146-d0fa-4659-9043-48fd5b1a5dee-logs" (OuterVolumeSpecName: "logs") pod "a1033146-d0fa-4659-9043-48fd5b1a5dee" (UID: "a1033146-d0fa-4659-9043-48fd5b1a5dee"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.344976 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1033146-d0fa-4659-9043-48fd5b1a5dee-kube-api-access-6mqbc" (OuterVolumeSpecName: "kube-api-access-6mqbc") pod "a1033146-d0fa-4659-9043-48fd5b1a5dee" (UID: "a1033146-d0fa-4659-9043-48fd5b1a5dee"). InnerVolumeSpecName "kube-api-access-6mqbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.350207 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-scripts" (OuterVolumeSpecName: "scripts") pod "a1033146-d0fa-4659-9043-48fd5b1a5dee" (UID: "a1033146-d0fa-4659-9043-48fd5b1a5dee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.367420 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a1033146-d0fa-4659-9043-48fd5b1a5dee" (UID: "a1033146-d0fa-4659-9043-48fd5b1a5dee"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.383431 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5518acc-0be1-4b59-874d-61eeb018a534" path="/var/lib/kubelet/pods/d5518acc-0be1-4b59-874d-61eeb018a534/volumes" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.402887 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1033146-d0fa-4659-9043-48fd5b1a5dee" (UID: "a1033146-d0fa-4659-9043-48fd5b1a5dee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.430591 4693 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1033146-d0fa-4659-9043-48fd5b1a5dee-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.430633 4693 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.430649 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mqbc\" (UniqueName: \"kubernetes.io/projected/a1033146-d0fa-4659-9043-48fd5b1a5dee-kube-api-access-6mqbc\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.430663 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.430674 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.430688 4693 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1033146-d0fa-4659-9043-48fd5b1a5dee-logs\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.472524 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-config-data" (OuterVolumeSpecName: "config-data") pod "a1033146-d0fa-4659-9043-48fd5b1a5dee" (UID: "a1033146-d0fa-4659-9043-48fd5b1a5dee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.537436 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1033146-d0fa-4659-9043-48fd5b1a5dee-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.781838 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.785627 4693 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poda85781c9-4fb2-4c98-802d-d9fa60ff72e0"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poda85781c9-4fb2-4c98-802d-d9fa60ff72e0] : Timed out while waiting for systemd to remove kubepods-besteffort-poda85781c9_4fb2_4c98_802d_d9fa60ff72e0.slice" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.865803 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a1033146-d0fa-4659-9043-48fd5b1a5dee","Type":"ContainerDied","Data":"2cc5595a60cfdd104d040b3f3c11022007b9f167fe4479594402d3fdaf99b247"} Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.865867 4693 scope.go:117] "RemoveContainer" containerID="c1fa3ca89031d35ba0ed3ce2080be294109768af6c117ffe4694420f860b058b" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.865867 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.912241 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.919957 4693 scope.go:117] "RemoveContainer" containerID="cdf7d44856e5a7226c6b7e3559e4a275739d1d680aed307d6d3e23192d1a0e64" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.922217 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.979082 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 12 16:13:27 crc kubenswrapper[4693]: E1212 16:13:27.980427 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1eb671f-5f5e-4bc1-a560-1ad1daa99569" containerName="init" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.980448 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1eb671f-5f5e-4bc1-a560-1ad1daa99569" containerName="init" Dec 12 16:13:27 crc kubenswrapper[4693]: E1212 16:13:27.980467 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5518acc-0be1-4b59-874d-61eeb018a534" containerName="neutron-httpd" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.980473 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5518acc-0be1-4b59-874d-61eeb018a534" containerName="neutron-httpd" Dec 12 16:13:27 crc kubenswrapper[4693]: E1212 16:13:27.980485 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5518acc-0be1-4b59-874d-61eeb018a534" containerName="neutron-api" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.980491 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5518acc-0be1-4b59-874d-61eeb018a534" containerName="neutron-api" Dec 12 16:13:27 crc kubenswrapper[4693]: E1212 16:13:27.980503 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1033146-d0fa-4659-9043-48fd5b1a5dee" containerName="cinder-api-log" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.980510 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1033146-d0fa-4659-9043-48fd5b1a5dee" containerName="cinder-api-log" Dec 12 16:13:27 crc kubenswrapper[4693]: E1212 16:13:27.980533 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1eb671f-5f5e-4bc1-a560-1ad1daa99569" containerName="dnsmasq-dns" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.980540 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1eb671f-5f5e-4bc1-a560-1ad1daa99569" containerName="dnsmasq-dns" Dec 12 16:13:27 crc kubenswrapper[4693]: E1212 16:13:27.980552 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1033146-d0fa-4659-9043-48fd5b1a5dee" containerName="cinder-api" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.980558 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1033146-d0fa-4659-9043-48fd5b1a5dee" containerName="cinder-api" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.980802 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1eb671f-5f5e-4bc1-a560-1ad1daa99569" containerName="dnsmasq-dns" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.980816 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5518acc-0be1-4b59-874d-61eeb018a534" containerName="neutron-api" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.980830 4693 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d5518acc-0be1-4b59-874d-61eeb018a534" containerName="neutron-httpd" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.980844 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1033146-d0fa-4659-9043-48fd5b1a5dee" containerName="cinder-api" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.980861 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1033146-d0fa-4659-9043-48fd5b1a5dee" containerName="cinder-api-log" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.982131 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.988369 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.988559 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 12 16:13:27 crc kubenswrapper[4693]: I1212 16:13:27.988622 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.001941 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.155673 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-config-data-custom\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.155725 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-config-data\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.155755 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/432f487a-816d-4e6f-96da-27a9151f9fee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.155844 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.156059 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.156588 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-public-tls-certs\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " 
pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.156721 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7scb\" (UniqueName: \"kubernetes.io/projected/432f487a-816d-4e6f-96da-27a9151f9fee-kube-api-access-q7scb\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.156780 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/432f487a-816d-4e6f-96da-27a9151f9fee-logs\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.156816 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-scripts\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.258605 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-scripts\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.258707 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-config-data-custom\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.258783 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-config-data\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.258813 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/432f487a-816d-4e6f-96da-27a9151f9fee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.258886 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.258925 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.259060 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.259128 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7scb\" (UniqueName: \"kubernetes.io/projected/432f487a-816d-4e6f-96da-27a9151f9fee-kube-api-access-q7scb\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.259200 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/432f487a-816d-4e6f-96da-27a9151f9fee-logs\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.259857 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/432f487a-816d-4e6f-96da-27a9151f9fee-logs\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.261556 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/432f487a-816d-4e6f-96da-27a9151f9fee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.268812 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.270825 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-scripts\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.275251 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.275524 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-config-data\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.276652 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-public-tls-certs\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.282341 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7scb\" (UniqueName: \"kubernetes.io/projected/432f487a-816d-4e6f-96da-27a9151f9fee-kube-api-access-q7scb\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" 
Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.308027 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/432f487a-816d-4e6f-96da-27a9151f9fee-config-data-custom\") pod \"cinder-api-0\" (UID: \"432f487a-816d-4e6f-96da-27a9151f9fee\") " pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.329801 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.552739 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7485c9c4f8-fs9g8" Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.666008 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-566c474f44-2zhg2"] Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.666331 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-566c474f44-2zhg2" podUID="750f263e-745d-4b68-94ee-ab44877c8403" containerName="barbican-api-log" containerID="cri-o://06f64021df8724dbd0e9d56a98d9045bfc0aebe13eee918ee760735098831916" gracePeriod=30 Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.666760 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-566c474f44-2zhg2" podUID="750f263e-745d-4b68-94ee-ab44877c8403" containerName="barbican-api" containerID="cri-o://1a9a39cf6ffa14412f47924f1fe8cb0356b4023014bd2fd6c250b6e73fc26234" gracePeriod=30 Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.894721 4693 generic.go:334] "Generic (PLEG): container finished" podID="750f263e-745d-4b68-94ee-ab44877c8403" containerID="06f64021df8724dbd0e9d56a98d9045bfc0aebe13eee918ee760735098831916" exitCode=143 Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.894795 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-566c474f44-2zhg2" event={"ID":"750f263e-745d-4b68-94ee-ab44877c8403","Type":"ContainerDied","Data":"06f64021df8724dbd0e9d56a98d9045bfc0aebe13eee918ee760735098831916"} Dec 12 16:13:28 crc kubenswrapper[4693]: I1212 16:13:28.965183 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 12 16:13:28 crc kubenswrapper[4693]: W1212 16:13:28.989635 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod432f487a_816d_4e6f_96da_27a9151f9fee.slice/crio-fd31d937fe6ca696098ed97e86ecb9995b06dfe4d55c087cf6e7689f8d9615bf WatchSource:0}: Error finding container fd31d937fe6ca696098ed97e86ecb9995b06dfe4d55c087cf6e7689f8d9615bf: Status 404 returned error can't find the container with id fd31d937fe6ca696098ed97e86ecb9995b06dfe4d55c087cf6e7689f8d9615bf Dec 12 16:13:29 crc kubenswrapper[4693]: I1212 16:13:29.402138 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1033146-d0fa-4659-9043-48fd5b1a5dee" path="/var/lib/kubelet/pods/a1033146-d0fa-4659-9043-48fd5b1a5dee/volumes" Dec 12 16:13:29 crc kubenswrapper[4693]: I1212 16:13:29.915116 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"432f487a-816d-4e6f-96da-27a9151f9fee","Type":"ContainerStarted","Data":"1db66480422e45679c55f44b4feca8c5fcd350987711c6805d65145b726b668c"} Dec 12 16:13:29 crc kubenswrapper[4693]: I1212 16:13:29.915449 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"432f487a-816d-4e6f-96da-27a9151f9fee","Type":"ContainerStarted","Data":"fd31d937fe6ca696098ed97e86ecb9995b06dfe4d55c087cf6e7689f8d9615bf"} Dec 12 16:13:30 crc kubenswrapper[4693]: I1212 16:13:30.168139 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 12 16:13:30 crc kubenswrapper[4693]: I1212 16:13:30.385505 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:30 crc kubenswrapper[4693]: I1212 16:13:30.516918 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kcgxg"] Dec 12 16:13:30 crc kubenswrapper[4693]: I1212 16:13:30.517502 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" podUID="c7f0e3f3-2cff-44bd-9f42-9736dc1947bb" containerName="dnsmasq-dns" containerID="cri-o://803d23edb766c615520609a0d9fe607f675ce9478744b4ac67b439452885d411" gracePeriod=10 Dec 12 16:13:30 crc kubenswrapper[4693]: I1212 16:13:30.764245 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 12 16:13:30 crc kubenswrapper[4693]: I1212 16:13:30.831772 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-79886c984d-rtppc" Dec 12 16:13:30 crc kubenswrapper[4693]: I1212 16:13:30.952204 4693 generic.go:334] "Generic (PLEG): container finished" podID="c7f0e3f3-2cff-44bd-9f42-9736dc1947bb" containerID="803d23edb766c615520609a0d9fe607f675ce9478744b4ac67b439452885d411" exitCode=0 Dec 12 16:13:30 crc kubenswrapper[4693]: I1212 16:13:30.956991 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" event={"ID":"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb","Type":"ContainerDied","Data":"803d23edb766c615520609a0d9fe607f675ce9478744b4ac67b439452885d411"} Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.013924 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-79886c984d-rtppc" Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.014357 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.501020 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.638685 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-dns-svc\") pod \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.638786 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz9hn\" (UniqueName: \"kubernetes.io/projected/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-kube-api-access-wz9hn\") pod \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.638933 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-dns-swift-storage-0\") pod \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.638970 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-config\") pod \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.639061 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-ovsdbserver-sb\") pod \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.639165 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-ovsdbserver-nb\") pod \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\" (UID: \"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb\") " Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.663129 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-kube-api-access-wz9hn" (OuterVolumeSpecName: "kube-api-access-wz9hn") pod "c7f0e3f3-2cff-44bd-9f42-9736dc1947bb" (UID: "c7f0e3f3-2cff-44bd-9f42-9736dc1947bb"). InnerVolumeSpecName "kube-api-access-wz9hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.724094 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7f0e3f3-2cff-44bd-9f42-9736dc1947bb" (UID: "c7f0e3f3-2cff-44bd-9f42-9736dc1947bb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.736430 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-777f5984bd-7smjx" Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.742836 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.742873 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz9hn\" (UniqueName: \"kubernetes.io/projected/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-kube-api-access-wz9hn\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.774004 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c7f0e3f3-2cff-44bd-9f42-9736dc1947bb" (UID: "c7f0e3f3-2cff-44bd-9f42-9736dc1947bb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.774125 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c7f0e3f3-2cff-44bd-9f42-9736dc1947bb" (UID: "c7f0e3f3-2cff-44bd-9f42-9736dc1947bb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.795868 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c7f0e3f3-2cff-44bd-9f42-9736dc1947bb" (UID: "c7f0e3f3-2cff-44bd-9f42-9736dc1947bb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.821248 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-config" (OuterVolumeSpecName: "config") pod "c7f0e3f3-2cff-44bd-9f42-9736dc1947bb" (UID: "c7f0e3f3-2cff-44bd-9f42-9736dc1947bb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.847963 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.847999 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.848009 4693 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:31 crc kubenswrapper[4693]: I1212 16:13:31.848020 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.008687 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"432f487a-816d-4e6f-96da-27a9151f9fee","Type":"ContainerStarted","Data":"949543d74f8564d7bd103fac4faccf7ddde1829c3c3726b8dd5d358a838f4e4f"} Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.010913 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.012925 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ea456678-a355-439f-8e5b-28c85dac7da6" containerName="cinder-scheduler" containerID="cri-o://1f980a6adcbc3f484e6ba1ab38fb873135d48813907334d25887a58441da8539" gracePeriod=30 Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.013365 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.013521 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ea456678-a355-439f-8e5b-28c85dac7da6" containerName="probe" containerID="cri-o://139b33de7ec3df8e687a11f628ceb839bb61a87d920881c407cc4cdb6c3ef16a" gracePeriod=30 Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.013408 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kcgxg" event={"ID":"c7f0e3f3-2cff-44bd-9f42-9736dc1947bb","Type":"ContainerDied","Data":"3f9245c51a24ac5b01d3b8ce19467edb1e379cfd42cffec51c4d7b7f4c3a6789"} Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.013836 4693 scope.go:117] "RemoveContainer" containerID="803d23edb766c615520609a0d9fe607f675ce9478744b4ac67b439452885d411" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.051505 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.051475934 podStartE2EDuration="5.051475934s" podCreationTimestamp="2025-12-12 16:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:32.041428254 +0000 UTC m=+1639.210067865" watchObservedRunningTime="2025-12-12 16:13:32.051475934 +0000 UTC m=+1639.220115535" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.064586 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-566c474f44-2zhg2" podUID="750f263e-745d-4b68-94ee-ab44877c8403" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.197:9311/healthcheck\": read tcp 10.217.0.2:57796->10.217.0.197:9311: read: connection reset by peer" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.069384 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-566c474f44-2zhg2" podUID="750f263e-745d-4b68-94ee-ab44877c8403" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.197:9311/healthcheck\": read tcp 10.217.0.2:57786->10.217.0.197:9311: read: connection reset by peer" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.082560 4693 scope.go:117] "RemoveContainer" containerID="944bdd326496ed4b3f51540d97a851a4fa42bf210e03d618070553ca24900ea6" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.185256 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 12 16:13:32 crc kubenswrapper[4693]: E1212 16:13:32.185870 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f0e3f3-2cff-44bd-9f42-9736dc1947bb" containerName="dnsmasq-dns" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.185895 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f0e3f3-2cff-44bd-9f42-9736dc1947bb" containerName="dnsmasq-dns" Dec 12 16:13:32 crc kubenswrapper[4693]: E1212 16:13:32.185940 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f0e3f3-2cff-44bd-9f42-9736dc1947bb" containerName="init" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.185951 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f0e3f3-2cff-44bd-9f42-9736dc1947bb" containerName="init" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.186173 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f0e3f3-2cff-44bd-9f42-9736dc1947bb" containerName="dnsmasq-dns" Dec 12 
16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.187331 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.193511 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.193720 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.193912 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-qwbtg" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.225264 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.263854 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhlqb\" (UniqueName: \"kubernetes.io/projected/12c7ffc7-9bea-4d52-99aa-130f36148263-kube-api-access-nhlqb\") pod \"openstackclient\" (UID: \"12c7ffc7-9bea-4d52-99aa-130f36148263\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.263929 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c7ffc7-9bea-4d52-99aa-130f36148263-combined-ca-bundle\") pod \"openstackclient\" (UID: \"12c7ffc7-9bea-4d52-99aa-130f36148263\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.263980 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12c7ffc7-9bea-4d52-99aa-130f36148263-openstack-config-secret\") pod \"openstackclient\" (UID: \"12c7ffc7-9bea-4d52-99aa-130f36148263\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.264062 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12c7ffc7-9bea-4d52-99aa-130f36148263-openstack-config\") pod \"openstackclient\" (UID: \"12c7ffc7-9bea-4d52-99aa-130f36148263\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.294645 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kcgxg"] Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.318894 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kcgxg"] Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.367055 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12c7ffc7-9bea-4d52-99aa-130f36148263-openstack-config\") pod \"openstackclient\" (UID: \"12c7ffc7-9bea-4d52-99aa-130f36148263\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.367203 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhlqb\" (UniqueName: \"kubernetes.io/projected/12c7ffc7-9bea-4d52-99aa-130f36148263-kube-api-access-nhlqb\") pod \"openstackclient\" (UID: \"12c7ffc7-9bea-4d52-99aa-130f36148263\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.367244 4693 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c7ffc7-9bea-4d52-99aa-130f36148263-combined-ca-bundle\") pod \"openstackclient\" (UID: \"12c7ffc7-9bea-4d52-99aa-130f36148263\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.367338 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12c7ffc7-9bea-4d52-99aa-130f36148263-openstack-config-secret\") pod \"openstackclient\" (UID: \"12c7ffc7-9bea-4d52-99aa-130f36148263\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.368757 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12c7ffc7-9bea-4d52-99aa-130f36148263-openstack-config\") pod \"openstackclient\" (UID: \"12c7ffc7-9bea-4d52-99aa-130f36148263\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.378536 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12c7ffc7-9bea-4d52-99aa-130f36148263-openstack-config-secret\") pod \"openstackclient\" (UID: \"12c7ffc7-9bea-4d52-99aa-130f36148263\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.381926 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c7ffc7-9bea-4d52-99aa-130f36148263-combined-ca-bundle\") pod \"openstackclient\" (UID: \"12c7ffc7-9bea-4d52-99aa-130f36148263\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.387753 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhlqb\" (UniqueName: \"kubernetes.io/projected/12c7ffc7-9bea-4d52-99aa-130f36148263-kube-api-access-nhlqb\") pod \"openstackclient\" (UID: \"12c7ffc7-9bea-4d52-99aa-130f36148263\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.465118 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.481220 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.483036 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.501566 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.503324 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.525345 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.578193 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/aa5c116e-ba6c-42ba-b865-b32b51104014-openstack-config\") pod \"openstackclient\" (UID: \"aa5c116e-ba6c-42ba-b865-b32b51104014\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.578330 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aa5c116e-ba6c-42ba-b865-b32b51104014-openstack-config-secret\") pod \"openstackclient\" (UID: \"aa5c116e-ba6c-42ba-b865-b32b51104014\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.578371 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbthj\" (UniqueName: \"kubernetes.io/projected/aa5c116e-ba6c-42ba-b865-b32b51104014-kube-api-access-wbthj\") pod \"openstackclient\" (UID: \"aa5c116e-ba6c-42ba-b865-b32b51104014\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.578417 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5c116e-ba6c-42ba-b865-b32b51104014-combined-ca-bundle\") pod \"openstackclient\" (UID: \"aa5c116e-ba6c-42ba-b865-b32b51104014\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.600752 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:32 crc kubenswrapper[4693]: E1212 16:13:32.641572 4693 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 12 16:13:32 crc kubenswrapper[4693]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_12c7ffc7-9bea-4d52-99aa-130f36148263_0(207bf5140645482189f6d7b128bc813afdd5f36ba5e7953edc26e5981cf46e60): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"207bf5140645482189f6d7b128bc813afdd5f36ba5e7953edc26e5981cf46e60" Netns:"/var/run/netns/d8d07c0e-78ba-45c7-9609-4c18ed2a6264" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=207bf5140645482189f6d7b128bc813afdd5f36ba5e7953edc26e5981cf46e60;K8S_POD_UID=12c7ffc7-9bea-4d52-99aa-130f36148263" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/12c7ffc7-9bea-4d52-99aa-130f36148263]: expected pod UID "12c7ffc7-9bea-4d52-99aa-130f36148263" but got "aa5c116e-ba6c-42ba-b865-b32b51104014" from Kube API Dec 12 16:13:32 crc kubenswrapper[4693]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 12 16:13:32 crc kubenswrapper[4693]: > Dec 12 16:13:32 crc kubenswrapper[4693]: E1212 16:13:32.641669 4693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 12 16:13:32 crc kubenswrapper[4693]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_12c7ffc7-9bea-4d52-99aa-130f36148263_0(207bf5140645482189f6d7b128bc813afdd5f36ba5e7953edc26e5981cf46e60): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"207bf5140645482189f6d7b128bc813afdd5f36ba5e7953edc26e5981cf46e60" Netns:"/var/run/netns/d8d07c0e-78ba-45c7-9609-4c18ed2a6264" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=207bf5140645482189f6d7b128bc813afdd5f36ba5e7953edc26e5981cf46e60;K8S_POD_UID=12c7ffc7-9bea-4d52-99aa-130f36148263" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/12c7ffc7-9bea-4d52-99aa-130f36148263]: expected pod UID "12c7ffc7-9bea-4d52-99aa-130f36148263" but got "aa5c116e-ba6c-42ba-b865-b32b51104014" from Kube API Dec 12 16:13:32 crc kubenswrapper[4693]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 12 16:13:32 crc kubenswrapper[4693]: > pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.679840 4693 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-combined-ca-bundle\") pod \"750f263e-745d-4b68-94ee-ab44877c8403\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.679922 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/750f263e-745d-4b68-94ee-ab44877c8403-logs\") pod \"750f263e-745d-4b68-94ee-ab44877c8403\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.680151 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-config-data\") pod \"750f263e-745d-4b68-94ee-ab44877c8403\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.680242 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-config-data-custom\") pod \"750f263e-745d-4b68-94ee-ab44877c8403\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.680359 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxvjm\" (UniqueName: \"kubernetes.io/projected/750f263e-745d-4b68-94ee-ab44877c8403-kube-api-access-gxvjm\") pod \"750f263e-745d-4b68-94ee-ab44877c8403\" (UID: \"750f263e-745d-4b68-94ee-ab44877c8403\") " Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.680914 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/aa5c116e-ba6c-42ba-b865-b32b51104014-openstack-config\") pod \"openstackclient\" (UID: \"aa5c116e-ba6c-42ba-b865-b32b51104014\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.681020 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aa5c116e-ba6c-42ba-b865-b32b51104014-openstack-config-secret\") pod \"openstackclient\" (UID: \"aa5c116e-ba6c-42ba-b865-b32b51104014\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.681056 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbthj\" (UniqueName: \"kubernetes.io/projected/aa5c116e-ba6c-42ba-b865-b32b51104014-kube-api-access-wbthj\") pod \"openstackclient\" (UID: \"aa5c116e-ba6c-42ba-b865-b32b51104014\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.681103 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5c116e-ba6c-42ba-b865-b32b51104014-combined-ca-bundle\") pod \"openstackclient\" (UID: \"aa5c116e-ba6c-42ba-b865-b32b51104014\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.682443 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/750f263e-745d-4b68-94ee-ab44877c8403-logs" (OuterVolumeSpecName: "logs") pod "750f263e-745d-4b68-94ee-ab44877c8403" (UID: "750f263e-745d-4b68-94ee-ab44877c8403"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.683365 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/aa5c116e-ba6c-42ba-b865-b32b51104014-openstack-config\") pod \"openstackclient\" (UID: \"aa5c116e-ba6c-42ba-b865-b32b51104014\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.689598 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "750f263e-745d-4b68-94ee-ab44877c8403" (UID: "750f263e-745d-4b68-94ee-ab44877c8403"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.690003 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/750f263e-745d-4b68-94ee-ab44877c8403-kube-api-access-gxvjm" (OuterVolumeSpecName: "kube-api-access-gxvjm") pod "750f263e-745d-4b68-94ee-ab44877c8403" (UID: "750f263e-745d-4b68-94ee-ab44877c8403"). InnerVolumeSpecName "kube-api-access-gxvjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.698732 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5c116e-ba6c-42ba-b865-b32b51104014-combined-ca-bundle\") pod \"openstackclient\" (UID: \"aa5c116e-ba6c-42ba-b865-b32b51104014\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.712586 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aa5c116e-ba6c-42ba-b865-b32b51104014-openstack-config-secret\") pod \"openstackclient\" (UID: \"aa5c116e-ba6c-42ba-b865-b32b51104014\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.715974 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbthj\" (UniqueName: \"kubernetes.io/projected/aa5c116e-ba6c-42ba-b865-b32b51104014-kube-api-access-wbthj\") pod \"openstackclient\" (UID: \"aa5c116e-ba6c-42ba-b865-b32b51104014\") " pod="openstack/openstackclient" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.747823 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "750f263e-745d-4b68-94ee-ab44877c8403" (UID: "750f263e-745d-4b68-94ee-ab44877c8403"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.761438 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-config-data" (OuterVolumeSpecName: "config-data") pod "750f263e-745d-4b68-94ee-ab44877c8403" (UID: "750f263e-745d-4b68-94ee-ab44877c8403"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.783251 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.783309 4693 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/750f263e-745d-4b68-94ee-ab44877c8403-logs\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.783323 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.783336 4693 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/750f263e-745d-4b68-94ee-ab44877c8403-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.783349 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxvjm\" (UniqueName: \"kubernetes.io/projected/750f263e-745d-4b68-94ee-ab44877c8403-kube-api-access-gxvjm\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:32 crc kubenswrapper[4693]: I1212 16:13:32.862658 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.058298 4693 generic.go:334] "Generic (PLEG): container finished" podID="750f263e-745d-4b68-94ee-ab44877c8403" containerID="1a9a39cf6ffa14412f47924f1fe8cb0356b4023014bd2fd6c250b6e73fc26234" exitCode=0 Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.058528 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-566c474f44-2zhg2" event={"ID":"750f263e-745d-4b68-94ee-ab44877c8403","Type":"ContainerDied","Data":"1a9a39cf6ffa14412f47924f1fe8cb0356b4023014bd2fd6c250b6e73fc26234"} Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.058504 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-566c474f44-2zhg2" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.058581 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-566c474f44-2zhg2" event={"ID":"750f263e-745d-4b68-94ee-ab44877c8403","Type":"ContainerDied","Data":"a2bda1e5d0cb30d7486645d15e884753da552693674cc42106df79e1664e0699"} Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.058605 4693 scope.go:117] "RemoveContainer" containerID="1a9a39cf6ffa14412f47924f1fe8cb0356b4023014bd2fd6c250b6e73fc26234" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.075537 4693 generic.go:334] "Generic (PLEG): container finished" podID="ea456678-a355-439f-8e5b-28c85dac7da6" containerID="139b33de7ec3df8e687a11f628ceb839bb61a87d920881c407cc4cdb6c3ef16a" exitCode=0 Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.075623 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea456678-a355-439f-8e5b-28c85dac7da6","Type":"ContainerDied","Data":"139b33de7ec3df8e687a11f628ceb839bb61a87d920881c407cc4cdb6c3ef16a"} Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.075880 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.079077 4693 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="12c7ffc7-9bea-4d52-99aa-130f36148263" podUID="aa5c116e-ba6c-42ba-b865-b32b51104014" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.093346 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.128984 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-566c474f44-2zhg2"] Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.150289 4693 scope.go:117] "RemoveContainer" containerID="06f64021df8724dbd0e9d56a98d9045bfc0aebe13eee918ee760735098831916" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.155769 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-566c474f44-2zhg2"] Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.191195 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12c7ffc7-9bea-4d52-99aa-130f36148263-openstack-config\") pod \"12c7ffc7-9bea-4d52-99aa-130f36148263\" (UID: \"12c7ffc7-9bea-4d52-99aa-130f36148263\") " Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.191443 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhlqb\" (UniqueName: \"kubernetes.io/projected/12c7ffc7-9bea-4d52-99aa-130f36148263-kube-api-access-nhlqb\") pod \"12c7ffc7-9bea-4d52-99aa-130f36148263\" (UID: \"12c7ffc7-9bea-4d52-99aa-130f36148263\") " Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.191486 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12c7ffc7-9bea-4d52-99aa-130f36148263-openstack-config-secret\") pod \"12c7ffc7-9bea-4d52-99aa-130f36148263\" (UID: \"12c7ffc7-9bea-4d52-99aa-130f36148263\") " Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.191632 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c7ffc7-9bea-4d52-99aa-130f36148263-combined-ca-bundle\") pod \"12c7ffc7-9bea-4d52-99aa-130f36148263\" (UID: \"12c7ffc7-9bea-4d52-99aa-130f36148263\") " Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.193600 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12c7ffc7-9bea-4d52-99aa-130f36148263-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "12c7ffc7-9bea-4d52-99aa-130f36148263" (UID: "12c7ffc7-9bea-4d52-99aa-130f36148263"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.214771 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c7ffc7-9bea-4d52-99aa-130f36148263-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "12c7ffc7-9bea-4d52-99aa-130f36148263" (UID: "12c7ffc7-9bea-4d52-99aa-130f36148263"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.226612 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c7ffc7-9bea-4d52-99aa-130f36148263-kube-api-access-nhlqb" (OuterVolumeSpecName: "kube-api-access-nhlqb") pod "12c7ffc7-9bea-4d52-99aa-130f36148263" (UID: "12c7ffc7-9bea-4d52-99aa-130f36148263"). InnerVolumeSpecName "kube-api-access-nhlqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.227461 4693 scope.go:117] "RemoveContainer" containerID="1a9a39cf6ffa14412f47924f1fe8cb0356b4023014bd2fd6c250b6e73fc26234" Dec 12 16:13:33 crc kubenswrapper[4693]: E1212 16:13:33.227991 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a9a39cf6ffa14412f47924f1fe8cb0356b4023014bd2fd6c250b6e73fc26234\": container with ID starting with 1a9a39cf6ffa14412f47924f1fe8cb0356b4023014bd2fd6c250b6e73fc26234 not found: ID does not exist" containerID="1a9a39cf6ffa14412f47924f1fe8cb0356b4023014bd2fd6c250b6e73fc26234" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.228030 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a9a39cf6ffa14412f47924f1fe8cb0356b4023014bd2fd6c250b6e73fc26234"} err="failed to get container status \"1a9a39cf6ffa14412f47924f1fe8cb0356b4023014bd2fd6c250b6e73fc26234\": rpc error: code = NotFound desc = could not find container \"1a9a39cf6ffa14412f47924f1fe8cb0356b4023014bd2fd6c250b6e73fc26234\": container with ID starting with 1a9a39cf6ffa14412f47924f1fe8cb0356b4023014bd2fd6c250b6e73fc26234 not found: ID does not exist" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.228058 4693 scope.go:117] "RemoveContainer" containerID="06f64021df8724dbd0e9d56a98d9045bfc0aebe13eee918ee760735098831916" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.234451 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c7ffc7-9bea-4d52-99aa-130f36148263-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12c7ffc7-9bea-4d52-99aa-130f36148263" (UID: "12c7ffc7-9bea-4d52-99aa-130f36148263"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:33 crc kubenswrapper[4693]: E1212 16:13:33.244543 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06f64021df8724dbd0e9d56a98d9045bfc0aebe13eee918ee760735098831916\": container with ID starting with 06f64021df8724dbd0e9d56a98d9045bfc0aebe13eee918ee760735098831916 not found: ID does not exist" containerID="06f64021df8724dbd0e9d56a98d9045bfc0aebe13eee918ee760735098831916" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.244604 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f64021df8724dbd0e9d56a98d9045bfc0aebe13eee918ee760735098831916"} err="failed to get container status \"06f64021df8724dbd0e9d56a98d9045bfc0aebe13eee918ee760735098831916\": rpc error: code = NotFound desc = could not find container \"06f64021df8724dbd0e9d56a98d9045bfc0aebe13eee918ee760735098831916\": container with ID starting with 06f64021df8724dbd0e9d56a98d9045bfc0aebe13eee918ee760735098831916 not found: ID does not exist" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.297113 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c7ffc7-9bea-4d52-99aa-130f36148263-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.297156 4693 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12c7ffc7-9bea-4d52-99aa-130f36148263-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.297169 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhlqb\" (UniqueName: \"kubernetes.io/projected/12c7ffc7-9bea-4d52-99aa-130f36148263-kube-api-access-nhlqb\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.297180 4693 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12c7ffc7-9bea-4d52-99aa-130f36148263-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.418335 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12c7ffc7-9bea-4d52-99aa-130f36148263" path="/var/lib/kubelet/pods/12c7ffc7-9bea-4d52-99aa-130f36148263/volumes" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.418783 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="750f263e-745d-4b68-94ee-ab44877c8403" path="/var/lib/kubelet/pods/750f263e-745d-4b68-94ee-ab44877c8403/volumes" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.429507 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7f0e3f3-2cff-44bd-9f42-9736dc1947bb" path="/var/lib/kubelet/pods/c7f0e3f3-2cff-44bd-9f42-9736dc1947bb/volumes" Dec 12 16:13:33 crc kubenswrapper[4693]: I1212 16:13:33.480290 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.093957 4693 generic.go:334] "Generic (PLEG): container finished" podID="ea456678-a355-439f-8e5b-28c85dac7da6" containerID="1f980a6adcbc3f484e6ba1ab38fb873135d48813907334d25887a58441da8539" exitCode=0 Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.094210 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"ea456678-a355-439f-8e5b-28c85dac7da6","Type":"ContainerDied","Data":"1f980a6adcbc3f484e6ba1ab38fb873135d48813907334d25887a58441da8539"} Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.095542 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.096370 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"aa5c116e-ba6c-42ba-b865-b32b51104014","Type":"ContainerStarted","Data":"0833e43cb1d33dde78a567a4d11c928367fd3312d2ccd21f42a2872dc8b510c4"} Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.101991 4693 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="12c7ffc7-9bea-4d52-99aa-130f36148263" podUID="aa5c116e-ba6c-42ba-b865-b32b51104014" Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.257367 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.330507 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-combined-ca-bundle\") pod \"ea456678-a355-439f-8e5b-28c85dac7da6\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.330627 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mthct\" (UniqueName: \"kubernetes.io/projected/ea456678-a355-439f-8e5b-28c85dac7da6-kube-api-access-mthct\") pod \"ea456678-a355-439f-8e5b-28c85dac7da6\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.330744 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-config-data-custom\") pod \"ea456678-a355-439f-8e5b-28c85dac7da6\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.330790 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-scripts\") pod \"ea456678-a355-439f-8e5b-28c85dac7da6\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.330858 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-config-data\") pod \"ea456678-a355-439f-8e5b-28c85dac7da6\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.331087 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea456678-a355-439f-8e5b-28c85dac7da6-etc-machine-id\") pod \"ea456678-a355-439f-8e5b-28c85dac7da6\" (UID: \"ea456678-a355-439f-8e5b-28c85dac7da6\") " Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.331807 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea456678-a355-439f-8e5b-28c85dac7da6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ea456678-a355-439f-8e5b-28c85dac7da6" (UID: "ea456678-a355-439f-8e5b-28c85dac7da6"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.338879 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ea456678-a355-439f-8e5b-28c85dac7da6" (UID: "ea456678-a355-439f-8e5b-28c85dac7da6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.354337 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea456678-a355-439f-8e5b-28c85dac7da6-kube-api-access-mthct" (OuterVolumeSpecName: "kube-api-access-mthct") pod "ea456678-a355-439f-8e5b-28c85dac7da6" (UID: "ea456678-a355-439f-8e5b-28c85dac7da6"). InnerVolumeSpecName "kube-api-access-mthct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.355089 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-scripts" (OuterVolumeSpecName: "scripts") pod "ea456678-a355-439f-8e5b-28c85dac7da6" (UID: "ea456678-a355-439f-8e5b-28c85dac7da6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.412337 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea456678-a355-439f-8e5b-28c85dac7da6" (UID: "ea456678-a355-439f-8e5b-28c85dac7da6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.434089 4693 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.434131 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.434143 4693 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea456678-a355-439f-8e5b-28c85dac7da6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.434156 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.434168 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mthct\" (UniqueName: \"kubernetes.io/projected/ea456678-a355-439f-8e5b-28c85dac7da6-kube-api-access-mthct\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.485002 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-config-data" (OuterVolumeSpecName: "config-data") pod "ea456678-a355-439f-8e5b-28c85dac7da6" (UID: "ea456678-a355-439f-8e5b-28c85dac7da6"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:34 crc kubenswrapper[4693]: I1212 16:13:34.538642 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea456678-a355-439f-8e5b-28c85dac7da6-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.111296 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea456678-a355-439f-8e5b-28c85dac7da6","Type":"ContainerDied","Data":"a6b6bee533cd26a47bb16a3f1fc607ecae040d70815ea57f805145eec67f6595"} Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.111626 4693 scope.go:117] "RemoveContainer" containerID="139b33de7ec3df8e687a11f628ceb839bb61a87d920881c407cc4cdb6c3ef16a" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.111416 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.154265 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.165793 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.171334 4693 scope.go:117] "RemoveContainer" containerID="1f980a6adcbc3f484e6ba1ab38fb873135d48813907334d25887a58441da8539" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.178565 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 16:13:35 crc kubenswrapper[4693]: E1212 16:13:35.179253 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750f263e-745d-4b68-94ee-ab44877c8403" containerName="barbican-api" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.179287 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="750f263e-745d-4b68-94ee-ab44877c8403" containerName="barbican-api" Dec 12 16:13:35 crc kubenswrapper[4693]: E1212 16:13:35.179318 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea456678-a355-439f-8e5b-28c85dac7da6" containerName="cinder-scheduler" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.179324 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea456678-a355-439f-8e5b-28c85dac7da6" containerName="cinder-scheduler" Dec 12 16:13:35 crc kubenswrapper[4693]: E1212 16:13:35.179353 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea456678-a355-439f-8e5b-28c85dac7da6" containerName="probe" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.179359 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea456678-a355-439f-8e5b-28c85dac7da6" containerName="probe" Dec 12 16:13:35 crc kubenswrapper[4693]: E1212 16:13:35.179385 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750f263e-745d-4b68-94ee-ab44877c8403" containerName="barbican-api-log" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.179391 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="750f263e-745d-4b68-94ee-ab44877c8403" containerName="barbican-api-log" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.179605 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="750f263e-745d-4b68-94ee-ab44877c8403" containerName="barbican-api-log" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.179624 4693 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ea456678-a355-439f-8e5b-28c85dac7da6" containerName="probe" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.179637 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="750f263e-745d-4b68-94ee-ab44877c8403" containerName="barbican-api" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.179648 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea456678-a355-439f-8e5b-28c85dac7da6" containerName="cinder-scheduler" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.180936 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.183744 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.194649 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.254117 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f98101ce-5311-42f6-951c-e0b8dd94641b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.254221 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98101ce-5311-42f6-951c-e0b8dd94641b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.254436 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98101ce-5311-42f6-951c-e0b8dd94641b-scripts\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.254499 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98101ce-5311-42f6-951c-e0b8dd94641b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.254539 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxcl8\" (UniqueName: \"kubernetes.io/projected/f98101ce-5311-42f6-951c-e0b8dd94641b-kube-api-access-kxcl8\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.254597 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98101ce-5311-42f6-951c-e0b8dd94641b-config-data\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.356069 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f98101ce-5311-42f6-951c-e0b8dd94641b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.356159 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98101ce-5311-42f6-951c-e0b8dd94641b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.356175 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f98101ce-5311-42f6-951c-e0b8dd94641b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.356328 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98101ce-5311-42f6-951c-e0b8dd94641b-scripts\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.356380 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98101ce-5311-42f6-951c-e0b8dd94641b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.356423 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxcl8\" (UniqueName: \"kubernetes.io/projected/f98101ce-5311-42f6-951c-e0b8dd94641b-kube-api-access-kxcl8\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.356544 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98101ce-5311-42f6-951c-e0b8dd94641b-config-data\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.361494 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98101ce-5311-42f6-951c-e0b8dd94641b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.361897 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98101ce-5311-42f6-951c-e0b8dd94641b-scripts\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.362822 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98101ce-5311-42f6-951c-e0b8dd94641b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.363016 4693 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98101ce-5311-42f6-951c-e0b8dd94641b-config-data\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.375939 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxcl8\" (UniqueName: \"kubernetes.io/projected/f98101ce-5311-42f6-951c-e0b8dd94641b-kube-api-access-kxcl8\") pod \"cinder-scheduler-0\" (UID: \"f98101ce-5311-42f6-951c-e0b8dd94641b\") " pod="openstack/cinder-scheduler-0" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.378724 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea456678-a355-439f-8e5b-28c85dac7da6" path="/var/lib/kubelet/pods/ea456678-a355-439f-8e5b-28c85dac7da6/volumes" Dec 12 16:13:35 crc kubenswrapper[4693]: I1212 16:13:35.515802 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 12 16:13:36 crc kubenswrapper[4693]: I1212 16:13:36.111714 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 16:13:37 crc kubenswrapper[4693]: I1212 16:13:37.145121 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f98101ce-5311-42f6-951c-e0b8dd94641b","Type":"ContainerStarted","Data":"72b0b942a93ffffdbb799c3a665c85b2551dc03de28c51ff400e54dae2f01eaa"} Dec 12 16:13:37 crc kubenswrapper[4693]: I1212 16:13:37.145541 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f98101ce-5311-42f6-951c-e0b8dd94641b","Type":"ContainerStarted","Data":"7bb993a60332871da8376e8557c8a2b1b535b70d97df85112cde876c4015a5f3"} Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.127758 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-85fcc456ff-kvgm6"] Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.132084 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.139911 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.140485 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.140602 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-kn7cj" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.186096 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f98101ce-5311-42f6-951c-e0b8dd94641b","Type":"ContainerStarted","Data":"5e58b7b452dfc8edac52c94b983b8dc9e2f4db5b85603d035c38c3d71eaac1ec"} Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.194213 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-85fcc456ff-kvgm6"] Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.237642 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-config-data\") pod \"heat-engine-85fcc456ff-kvgm6\" (UID: \"173f6b35-611d-436f-839c-64b2bee96977\") " pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.237727 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-combined-ca-bundle\") pod \"heat-engine-85fcc456ff-kvgm6\" (UID: \"173f6b35-611d-436f-839c-64b2bee96977\") " pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.237806 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tblwq\" (UniqueName: \"kubernetes.io/projected/173f6b35-611d-436f-839c-64b2bee96977-kube-api-access-tblwq\") pod \"heat-engine-85fcc456ff-kvgm6\" (UID: \"173f6b35-611d-436f-839c-64b2bee96977\") " pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.237856 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-config-data-custom\") pod \"heat-engine-85fcc456ff-kvgm6\" (UID: \"173f6b35-611d-436f-839c-64b2bee96977\") " pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.257330 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-56c75966f4-fvrkb"] Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.267079 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-56c75966f4-fvrkb" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.269912 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.287336 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-4fjxz"] Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.289475 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.321645 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-56c75966f4-fvrkb"] Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.334207 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-4fjxz"] Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.340338 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.340321023 podStartE2EDuration="3.340321023s" podCreationTimestamp="2025-12-12 16:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:38.228726401 +0000 UTC m=+1645.397366002" watchObservedRunningTime="2025-12-12 16:13:38.340321023 +0000 UTC m=+1645.508960624" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.348024 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-config-data\") pod \"heat-engine-85fcc456ff-kvgm6\" (UID: \"173f6b35-611d-436f-839c-64b2bee96977\") " pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.348116 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-combined-ca-bundle\") pod \"heat-engine-85fcc456ff-kvgm6\" (UID: \"173f6b35-611d-436f-839c-64b2bee96977\") " pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.348175 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tblwq\" (UniqueName: \"kubernetes.io/projected/173f6b35-611d-436f-839c-64b2bee96977-kube-api-access-tblwq\") pod \"heat-engine-85fcc456ff-kvgm6\" (UID: \"173f6b35-611d-436f-839c-64b2bee96977\") " pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.348231 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-config-data-custom\") pod \"heat-engine-85fcc456ff-kvgm6\" (UID: \"173f6b35-611d-436f-839c-64b2bee96977\") " pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.369959 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-combined-ca-bundle\") pod \"heat-engine-85fcc456ff-kvgm6\" (UID: \"173f6b35-611d-436f-839c-64b2bee96977\") " pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.370822 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-config-data-custom\") pod \"heat-engine-85fcc456ff-kvgm6\" (UID: \"173f6b35-611d-436f-839c-64b2bee96977\") " pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.374335 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6469c4dff9-vwxp5"] Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.376005 4693 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/heat-api-6469c4dff9-vwxp5" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.380385 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.381189 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-config-data\") pod \"heat-engine-85fcc456ff-kvgm6\" (UID: \"173f6b35-611d-436f-839c-64b2bee96977\") " pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.396817 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6469c4dff9-vwxp5"] Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.404388 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tblwq\" (UniqueName: \"kubernetes.io/projected/173f6b35-611d-436f-839c-64b2bee96977-kube-api-access-tblwq\") pod \"heat-engine-85fcc456ff-kvgm6\" (UID: \"173f6b35-611d-436f-839c-64b2bee96977\") " pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.451654 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-config-data\") pod \"heat-cfnapi-56c75966f4-fvrkb\" (UID: \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\") " pod="openstack/heat-cfnapi-56c75966f4-fvrkb" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.451698 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw6v7\" (UniqueName: \"kubernetes.io/projected/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-kube-api-access-kw6v7\") pod \"heat-cfnapi-56c75966f4-fvrkb\" (UID: \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\") " pod="openstack/heat-cfnapi-56c75966f4-fvrkb" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.451746 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-config-data-custom\") pod \"heat-cfnapi-56c75966f4-fvrkb\" (UID: \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\") " pod="openstack/heat-cfnapi-56c75966f4-fvrkb" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.451811 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-config\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.451853 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.451897 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: 
\"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.451944 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-combined-ca-bundle\") pod \"heat-cfnapi-56c75966f4-fvrkb\" (UID: \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\") " pod="openstack/heat-cfnapi-56c75966f4-fvrkb" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.451977 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.452022 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.452049 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f4gf\" (UniqueName: \"kubernetes.io/projected/2b9c8d52-1799-496d-9911-867479d89883-kube-api-access-8f4gf\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.467161 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.548317 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6bd4874f5f-5jsgt"] Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.553707 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.553783 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-combined-ca-bundle\") pod \"heat-cfnapi-56c75966f4-fvrkb\" (UID: \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\") " pod="openstack/heat-cfnapi-56c75966f4-fvrkb" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.553821 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.553893 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.553935 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f4gf\" (UniqueName: \"kubernetes.io/projected/2b9c8d52-1799-496d-9911-867479d89883-kube-api-access-8f4gf\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.554038 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-config-data\") pod \"heat-cfnapi-56c75966f4-fvrkb\" (UID: \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\") " pod="openstack/heat-cfnapi-56c75966f4-fvrkb" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.554054 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw6v7\" (UniqueName: \"kubernetes.io/projected/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-kube-api-access-kw6v7\") pod \"heat-cfnapi-56c75966f4-fvrkb\" (UID: \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\") " pod="openstack/heat-cfnapi-56c75966f4-fvrkb" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.554102 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-config-data\") pod \"heat-api-6469c4dff9-vwxp5\" (UID: \"70eecfd3-212d-4a52-9793-811a49b2020c\") " pod="openstack/heat-api-6469c4dff9-vwxp5" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.554121 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-config-data-custom\") pod \"heat-cfnapi-56c75966f4-fvrkb\" (UID: \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\") " pod="openstack/heat-cfnapi-56c75966f4-fvrkb" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.554174 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-config-data-custom\") pod \"heat-api-6469c4dff9-vwxp5\" (UID: \"70eecfd3-212d-4a52-9793-811a49b2020c\") " pod="openstack/heat-api-6469c4dff9-vwxp5" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.554222 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-config\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.554258 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9njzq\" (UniqueName: \"kubernetes.io/projected/70eecfd3-212d-4a52-9793-811a49b2020c-kube-api-access-9njzq\") pod \"heat-api-6469c4dff9-vwxp5\" (UID: \"70eecfd3-212d-4a52-9793-811a49b2020c\") " pod="openstack/heat-api-6469c4dff9-vwxp5" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.554341 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.554364 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-combined-ca-bundle\") pod \"heat-api-6469c4dff9-vwxp5\" (UID: \"70eecfd3-212d-4a52-9793-811a49b2020c\") " pod="openstack/heat-api-6469c4dff9-vwxp5" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.555049 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.556354 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.556680 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.558017 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.559088 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-config\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.560072 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.565536 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-config-data-custom\") pod \"heat-cfnapi-56c75966f4-fvrkb\" (UID: \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\") " pod="openstack/heat-cfnapi-56c75966f4-fvrkb" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.566209 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-combined-ca-bundle\") pod \"heat-cfnapi-56c75966f4-fvrkb\" (UID: \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\") " pod="openstack/heat-cfnapi-56c75966f4-fvrkb" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.566684 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-config-data\") pod \"heat-cfnapi-56c75966f4-fvrkb\" (UID: \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\") " pod="openstack/heat-cfnapi-56c75966f4-fvrkb" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.571846 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.572024 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.572121 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.580756 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6bd4874f5f-5jsgt"] Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.589042 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw6v7\" 
(UniqueName: \"kubernetes.io/projected/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-kube-api-access-kw6v7\") pod \"heat-cfnapi-56c75966f4-fvrkb\" (UID: \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\") " pod="openstack/heat-cfnapi-56c75966f4-fvrkb" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.605526 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f4gf\" (UniqueName: \"kubernetes.io/projected/2b9c8d52-1799-496d-9911-867479d89883-kube-api-access-8f4gf\") pod \"dnsmasq-dns-7756b9d78c-4fjxz\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") " pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.621387 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-56c75966f4-fvrkb" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.652369 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.656712 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-combined-ca-bundle\") pod \"heat-api-6469c4dff9-vwxp5\" (UID: \"70eecfd3-212d-4a52-9793-811a49b2020c\") " pod="openstack/heat-api-6469c4dff9-vwxp5" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.656775 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5xp6\" (UniqueName: \"kubernetes.io/projected/57e95c5b-e99a-474a-bd61-9075a7594455-kube-api-access-s5xp6\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.656847 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e95c5b-e99a-474a-bd61-9075a7594455-combined-ca-bundle\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.656872 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e95c5b-e99a-474a-bd61-9075a7594455-internal-tls-certs\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.656891 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e95c5b-e99a-474a-bd61-9075a7594455-config-data\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.656948 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/57e95c5b-e99a-474a-bd61-9075a7594455-etc-swift\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.656963 4693 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e95c5b-e99a-474a-bd61-9075a7594455-public-tls-certs\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.656999 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57e95c5b-e99a-474a-bd61-9075a7594455-log-httpd\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.657034 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-config-data\") pod \"heat-api-6469c4dff9-vwxp5\" (UID: \"70eecfd3-212d-4a52-9793-811a49b2020c\") " pod="openstack/heat-api-6469c4dff9-vwxp5" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.657069 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-config-data-custom\") pod \"heat-api-6469c4dff9-vwxp5\" (UID: \"70eecfd3-212d-4a52-9793-811a49b2020c\") " pod="openstack/heat-api-6469c4dff9-vwxp5" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.657107 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57e95c5b-e99a-474a-bd61-9075a7594455-run-httpd\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.657129 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9njzq\" (UniqueName: \"kubernetes.io/projected/70eecfd3-212d-4a52-9793-811a49b2020c-kube-api-access-9njzq\") pod \"heat-api-6469c4dff9-vwxp5\" (UID: \"70eecfd3-212d-4a52-9793-811a49b2020c\") " pod="openstack/heat-api-6469c4dff9-vwxp5" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.663573 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-combined-ca-bundle\") pod \"heat-api-6469c4dff9-vwxp5\" (UID: \"70eecfd3-212d-4a52-9793-811a49b2020c\") " pod="openstack/heat-api-6469c4dff9-vwxp5" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.668571 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-config-data\") pod \"heat-api-6469c4dff9-vwxp5\" (UID: \"70eecfd3-212d-4a52-9793-811a49b2020c\") " pod="openstack/heat-api-6469c4dff9-vwxp5" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.681154 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-config-data-custom\") pod \"heat-api-6469c4dff9-vwxp5\" (UID: \"70eecfd3-212d-4a52-9793-811a49b2020c\") " pod="openstack/heat-api-6469c4dff9-vwxp5" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.682042 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9njzq\" (UniqueName: \"kubernetes.io/projected/70eecfd3-212d-4a52-9793-811a49b2020c-kube-api-access-9njzq\") pod \"heat-api-6469c4dff9-vwxp5\" (UID: \"70eecfd3-212d-4a52-9793-811a49b2020c\") " pod="openstack/heat-api-6469c4dff9-vwxp5" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.758889 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57e95c5b-e99a-474a-bd61-9075a7594455-log-httpd\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.759368 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57e95c5b-e99a-474a-bd61-9075a7594455-run-httpd\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.759441 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5xp6\" (UniqueName: \"kubernetes.io/projected/57e95c5b-e99a-474a-bd61-9075a7594455-kube-api-access-s5xp6\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.759531 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e95c5b-e99a-474a-bd61-9075a7594455-combined-ca-bundle\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.759572 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e95c5b-e99a-474a-bd61-9075a7594455-internal-tls-certs\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.759605 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e95c5b-e99a-474a-bd61-9075a7594455-config-data\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.759654 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57e95c5b-e99a-474a-bd61-9075a7594455-log-httpd\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.759695 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/57e95c5b-e99a-474a-bd61-9075a7594455-etc-swift\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.759719 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/57e95c5b-e99a-474a-bd61-9075a7594455-public-tls-certs\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.760706 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57e95c5b-e99a-474a-bd61-9075a7594455-run-httpd\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.772597 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/57e95c5b-e99a-474a-bd61-9075a7594455-etc-swift\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.773557 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e95c5b-e99a-474a-bd61-9075a7594455-public-tls-certs\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.773752 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e95c5b-e99a-474a-bd61-9075a7594455-combined-ca-bundle\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.774309 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e95c5b-e99a-474a-bd61-9075a7594455-config-data\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.774769 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e95c5b-e99a-474a-bd61-9075a7594455-internal-tls-certs\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.774815 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6469c4dff9-vwxp5" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.782240 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5xp6\" (UniqueName: \"kubernetes.io/projected/57e95c5b-e99a-474a-bd61-9075a7594455-kube-api-access-s5xp6\") pod \"swift-proxy-6bd4874f5f-5jsgt\" (UID: \"57e95c5b-e99a-474a-bd61-9075a7594455\") " pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:38 crc kubenswrapper[4693]: I1212 16:13:38.801638 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:39 crc kubenswrapper[4693]: I1212 16:13:39.115904 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-85fcc456ff-kvgm6"] Dec 12 16:13:39 crc kubenswrapper[4693]: W1212 16:13:39.178927 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod173f6b35_611d_436f_839c_64b2bee96977.slice/crio-dbf9f7361d7476c540dae6f45530f3431c97b288891922aa885e128b93a95085 WatchSource:0}: Error finding container dbf9f7361d7476c540dae6f45530f3431c97b288891922aa885e128b93a95085: Status 404 returned error can't find the container with id dbf9f7361d7476c540dae6f45530f3431c97b288891922aa885e128b93a95085 Dec 12 16:13:39 crc kubenswrapper[4693]: W1212 16:13:39.472089 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod317fe3cf_1373_4cfe_9cd9_6d80050d4c3c.slice/crio-e93441e03b7573202f0c472d015949b0232e993353b52c64620144e735be5862 WatchSource:0}: Error finding container e93441e03b7573202f0c472d015949b0232e993353b52c64620144e735be5862: Status 404 returned error can't find the container with id e93441e03b7573202f0c472d015949b0232e993353b52c64620144e735be5862 Dec 12 16:13:39 crc kubenswrapper[4693]: I1212 16:13:39.477586 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-56c75966f4-fvrkb"] Dec 12 16:13:39 crc kubenswrapper[4693]: I1212 16:13:39.631945 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-4fjxz"] Dec 12 16:13:39 crc kubenswrapper[4693]: W1212 16:13:39.641701 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b9c8d52_1799_496d_9911_867479d89883.slice/crio-b4b82468a89598a6399ef5d319bf7cc6e62421165a292125f7a072e5bb2d0a8a WatchSource:0}: Error finding container b4b82468a89598a6399ef5d319bf7cc6e62421165a292125f7a072e5bb2d0a8a: Status 404 returned error can't find the container with id b4b82468a89598a6399ef5d319bf7cc6e62421165a292125f7a072e5bb2d0a8a Dec 12 16:13:39 crc kubenswrapper[4693]: I1212 16:13:39.930409 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6469c4dff9-vwxp5"] Dec 12 16:13:39 crc kubenswrapper[4693]: W1212 16:13:39.945497 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70eecfd3_212d_4a52_9793_811a49b2020c.slice/crio-dc8657913c473714a48b5df581fb406ca4206d1990ea7857690eb45f3f7c22a3 WatchSource:0}: Error finding container dc8657913c473714a48b5df581fb406ca4206d1990ea7857690eb45f3f7c22a3: Status 404 returned error can't find the container with id dc8657913c473714a48b5df581fb406ca4206d1990ea7857690eb45f3f7c22a3 Dec 12 16:13:40 crc kubenswrapper[4693]: I1212 16:13:40.004898 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6bd4874f5f-5jsgt"] Dec 12 16:13:40 crc kubenswrapper[4693]: I1212 16:13:40.216015 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-85fcc456ff-kvgm6" event={"ID":"173f6b35-611d-436f-839c-64b2bee96977","Type":"ContainerStarted","Data":"83a5da81c826852251080b7b175edc2d7b3d42fb08198e52ea4c4220ccec4cda"} Dec 12 16:13:40 crc kubenswrapper[4693]: I1212 16:13:40.216104 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-85fcc456ff-kvgm6" 
event={"ID":"173f6b35-611d-436f-839c-64b2bee96977","Type":"ContainerStarted","Data":"dbf9f7361d7476c540dae6f45530f3431c97b288891922aa885e128b93a95085"} Dec 12 16:13:40 crc kubenswrapper[4693]: I1212 16:13:40.216199 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:13:40 crc kubenswrapper[4693]: I1212 16:13:40.218627 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-56c75966f4-fvrkb" event={"ID":"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c","Type":"ContainerStarted","Data":"e93441e03b7573202f0c472d015949b0232e993353b52c64620144e735be5862"} Dec 12 16:13:40 crc kubenswrapper[4693]: I1212 16:13:40.222118 4693 generic.go:334] "Generic (PLEG): container finished" podID="2b9c8d52-1799-496d-9911-867479d89883" containerID="59e229394981603fc0305f84b89d1074f1a0ef60906a9beeefd5c630d799dbce" exitCode=0 Dec 12 16:13:40 crc kubenswrapper[4693]: I1212 16:13:40.222201 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" event={"ID":"2b9c8d52-1799-496d-9911-867479d89883","Type":"ContainerDied","Data":"59e229394981603fc0305f84b89d1074f1a0ef60906a9beeefd5c630d799dbce"} Dec 12 16:13:40 crc kubenswrapper[4693]: I1212 16:13:40.222236 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" event={"ID":"2b9c8d52-1799-496d-9911-867479d89883","Type":"ContainerStarted","Data":"b4b82468a89598a6399ef5d319bf7cc6e62421165a292125f7a072e5bb2d0a8a"} Dec 12 16:13:40 crc kubenswrapper[4693]: I1212 16:13:40.227304 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bd4874f5f-5jsgt" event={"ID":"57e95c5b-e99a-474a-bd61-9075a7594455","Type":"ContainerStarted","Data":"5e1961901bf1fb2c4cbaa6ca4aac0b6d2563fe5169370634c1d0fb8aa4005e80"} Dec 12 16:13:40 crc kubenswrapper[4693]: I1212 16:13:40.233034 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6469c4dff9-vwxp5" event={"ID":"70eecfd3-212d-4a52-9793-811a49b2020c","Type":"ContainerStarted","Data":"dc8657913c473714a48b5df581fb406ca4206d1990ea7857690eb45f3f7c22a3"} Dec 12 16:13:40 crc kubenswrapper[4693]: I1212 16:13:40.241987 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-85fcc456ff-kvgm6" podStartSLOduration=2.241972406 podStartE2EDuration="2.241972406s" podCreationTimestamp="2025-12-12 16:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:40.235003089 +0000 UTC m=+1647.403642690" watchObservedRunningTime="2025-12-12 16:13:40.241972406 +0000 UTC m=+1647.410612017" Dec 12 16:13:40 crc kubenswrapper[4693]: I1212 16:13:40.517153 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 12 16:13:41 crc kubenswrapper[4693]: I1212 16:13:41.250061 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" event={"ID":"2b9c8d52-1799-496d-9911-867479d89883","Type":"ContainerStarted","Data":"b8e4a68e48bcfd6975299d00dc6e1e4ba81b7e28b96cf32504edf6bdb587243d"} Dec 12 16:13:41 crc kubenswrapper[4693]: I1212 16:13:41.251377 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:41 crc kubenswrapper[4693]: I1212 16:13:41.260017 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bd4874f5f-5jsgt" 
event={"ID":"57e95c5b-e99a-474a-bd61-9075a7594455","Type":"ContainerStarted","Data":"aedfb99fb2a005fdb4d5d1e1601c2cc1cccef5dbfedab03a48174831833f60e9"} Dec 12 16:13:41 crc kubenswrapper[4693]: I1212 16:13:41.260053 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bd4874f5f-5jsgt" event={"ID":"57e95c5b-e99a-474a-bd61-9075a7594455","Type":"ContainerStarted","Data":"954a4889c1225474cd7d6ff7162d62b2d611eb159b0d7c2ab9211312b525a43a"} Dec 12 16:13:41 crc kubenswrapper[4693]: I1212 16:13:41.260067 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:41 crc kubenswrapper[4693]: I1212 16:13:41.260089 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:41 crc kubenswrapper[4693]: I1212 16:13:41.292891 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" podStartSLOduration=3.292865125 podStartE2EDuration="3.292865125s" podCreationTimestamp="2025-12-12 16:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:41.271734157 +0000 UTC m=+1648.440373758" watchObservedRunningTime="2025-12-12 16:13:41.292865125 +0000 UTC m=+1648.461504726" Dec 12 16:13:41 crc kubenswrapper[4693]: I1212 16:13:41.311092 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6bd4874f5f-5jsgt" podStartSLOduration=3.311067615 podStartE2EDuration="3.311067615s" podCreationTimestamp="2025-12-12 16:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:41.293976105 +0000 UTC m=+1648.462615716" watchObservedRunningTime="2025-12-12 16:13:41.311067615 +0000 UTC m=+1648.479707226" Dec 12 16:13:41 crc kubenswrapper[4693]: I1212 16:13:41.357585 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:13:41 crc kubenswrapper[4693]: E1212 16:13:41.357858 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:13:41 crc kubenswrapper[4693]: I1212 16:13:41.849855 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.498300 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-jq8js"] Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.500490 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jq8js" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.544604 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jq8js"] Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.593176 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-v7mxb"] Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.595664 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-v7mxb" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.597095 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjcth\" (UniqueName: \"kubernetes.io/projected/dc0d7ecc-b112-46ba-aceb-681e37cb50a3-kube-api-access-xjcth\") pod \"nova-api-db-create-jq8js\" (UID: \"dc0d7ecc-b112-46ba-aceb-681e37cb50a3\") " pod="openstack/nova-api-db-create-jq8js" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.597179 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0d7ecc-b112-46ba-aceb-681e37cb50a3-operator-scripts\") pod \"nova-api-db-create-jq8js\" (UID: \"dc0d7ecc-b112-46ba-aceb-681e37cb50a3\") " pod="openstack/nova-api-db-create-jq8js" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.623612 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-v7mxb"] Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.699656 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-954dw\" (UniqueName: \"kubernetes.io/projected/187aaad3-d0f1-4f94-8ee3-05b2df3da7f4-kube-api-access-954dw\") pod \"nova-cell0-db-create-v7mxb\" (UID: \"187aaad3-d0f1-4f94-8ee3-05b2df3da7f4\") " pod="openstack/nova-cell0-db-create-v7mxb" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.699705 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/187aaad3-d0f1-4f94-8ee3-05b2df3da7f4-operator-scripts\") pod \"nova-cell0-db-create-v7mxb\" (UID: \"187aaad3-d0f1-4f94-8ee3-05b2df3da7f4\") " pod="openstack/nova-cell0-db-create-v7mxb" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.699804 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjcth\" (UniqueName: \"kubernetes.io/projected/dc0d7ecc-b112-46ba-aceb-681e37cb50a3-kube-api-access-xjcth\") pod \"nova-api-db-create-jq8js\" (UID: \"dc0d7ecc-b112-46ba-aceb-681e37cb50a3\") " pod="openstack/nova-api-db-create-jq8js" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.699844 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0d7ecc-b112-46ba-aceb-681e37cb50a3-operator-scripts\") pod \"nova-api-db-create-jq8js\" (UID: \"dc0d7ecc-b112-46ba-aceb-681e37cb50a3\") " pod="openstack/nova-api-db-create-jq8js" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.700589 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0d7ecc-b112-46ba-aceb-681e37cb50a3-operator-scripts\") pod \"nova-api-db-create-jq8js\" (UID: \"dc0d7ecc-b112-46ba-aceb-681e37cb50a3\") " pod="openstack/nova-api-db-create-jq8js" Dec 12 16:13:42 crc 
kubenswrapper[4693]: I1212 16:13:42.704444 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-5lhks"] Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.706761 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-5lhks" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.739345 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjcth\" (UniqueName: \"kubernetes.io/projected/dc0d7ecc-b112-46ba-aceb-681e37cb50a3-kube-api-access-xjcth\") pod \"nova-api-db-create-jq8js\" (UID: \"dc0d7ecc-b112-46ba-aceb-681e37cb50a3\") " pod="openstack/nova-api-db-create-jq8js" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.754107 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-5lhks"] Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.818888 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9881347-9d47-4c92-93ec-aeee80ff784d-operator-scripts\") pod \"nova-cell1-db-create-5lhks\" (UID: \"c9881347-9d47-4c92-93ec-aeee80ff784d\") " pod="openstack/nova-cell1-db-create-5lhks" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.819068 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgj67\" (UniqueName: \"kubernetes.io/projected/c9881347-9d47-4c92-93ec-aeee80ff784d-kube-api-access-fgj67\") pod \"nova-cell1-db-create-5lhks\" (UID: \"c9881347-9d47-4c92-93ec-aeee80ff784d\") " pod="openstack/nova-cell1-db-create-5lhks" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.819121 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-954dw\" (UniqueName: \"kubernetes.io/projected/187aaad3-d0f1-4f94-8ee3-05b2df3da7f4-kube-api-access-954dw\") pod \"nova-cell0-db-create-v7mxb\" (UID: \"187aaad3-d0f1-4f94-8ee3-05b2df3da7f4\") " pod="openstack/nova-cell0-db-create-v7mxb" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.819178 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/187aaad3-d0f1-4f94-8ee3-05b2df3da7f4-operator-scripts\") pod \"nova-cell0-db-create-v7mxb\" (UID: \"187aaad3-d0f1-4f94-8ee3-05b2df3da7f4\") " pod="openstack/nova-cell0-db-create-v7mxb" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.819949 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/187aaad3-d0f1-4f94-8ee3-05b2df3da7f4-operator-scripts\") pod \"nova-cell0-db-create-v7mxb\" (UID: \"187aaad3-d0f1-4f94-8ee3-05b2df3da7f4\") " pod="openstack/nova-cell0-db-create-v7mxb" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.828340 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jq8js" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.845920 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-954dw\" (UniqueName: \"kubernetes.io/projected/187aaad3-d0f1-4f94-8ee3-05b2df3da7f4-kube-api-access-954dw\") pod \"nova-cell0-db-create-v7mxb\" (UID: \"187aaad3-d0f1-4f94-8ee3-05b2df3da7f4\") " pod="openstack/nova-cell0-db-create-v7mxb" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.882543 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-7e7f-account-create-update-f9glz"] Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.884342 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7e7f-account-create-update-f9glz" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.890485 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.914010 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7e7f-account-create-update-f9glz"] Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.922214 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9881347-9d47-4c92-93ec-aeee80ff784d-operator-scripts\") pod \"nova-cell1-db-create-5lhks\" (UID: \"c9881347-9d47-4c92-93ec-aeee80ff784d\") " pod="openstack/nova-cell1-db-create-5lhks" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.922998 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgj67\" (UniqueName: \"kubernetes.io/projected/c9881347-9d47-4c92-93ec-aeee80ff784d-kube-api-access-fgj67\") pod \"nova-cell1-db-create-5lhks\" (UID: \"c9881347-9d47-4c92-93ec-aeee80ff784d\") " pod="openstack/nova-cell1-db-create-5lhks" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.926128 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-v7mxb" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.928877 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9881347-9d47-4c92-93ec-aeee80ff784d-operator-scripts\") pod \"nova-cell1-db-create-5lhks\" (UID: \"c9881347-9d47-4c92-93ec-aeee80ff784d\") " pod="openstack/nova-cell1-db-create-5lhks" Dec 12 16:13:42 crc kubenswrapper[4693]: I1212 16:13:42.939507 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgj67\" (UniqueName: \"kubernetes.io/projected/c9881347-9d47-4c92-93ec-aeee80ff784d-kube-api-access-fgj67\") pod \"nova-cell1-db-create-5lhks\" (UID: \"c9881347-9d47-4c92-93ec-aeee80ff784d\") " pod="openstack/nova-cell1-db-create-5lhks" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.031612 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08fa7b7-2bf5-4695-be3c-4a455172a896-operator-scripts\") pod \"nova-api-7e7f-account-create-update-f9glz\" (UID: \"b08fa7b7-2bf5-4695-be3c-4a455172a896\") " pod="openstack/nova-api-7e7f-account-create-update-f9glz" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.031710 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6zpq\" (UniqueName: \"kubernetes.io/projected/b08fa7b7-2bf5-4695-be3c-4a455172a896-kube-api-access-q6zpq\") pod \"nova-api-7e7f-account-create-update-f9glz\" (UID: \"b08fa7b7-2bf5-4695-be3c-4a455172a896\") " pod="openstack/nova-api-7e7f-account-create-update-f9glz" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.067184 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.067642 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="ceilometer-central-agent" containerID="cri-o://3167185dcb78e8afab1e2cc577722bddfcb001ac1bae3df9790b36700676e095" gracePeriod=30 Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.068165 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="sg-core" containerID="cri-o://4c78635e88ffc50cb76174a91f620a324f579e0c56181a8a4d6d9312a4b81d4b" gracePeriod=30 Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.068209 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="proxy-httpd" containerID="cri-o://4199dbc803e76efad2b01d09b01976cd73c96ef2282fd5d932840a4833b52cff" gracePeriod=30 Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.068238 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="ceilometer-notification-agent" containerID="cri-o://d60dbc8e9e816078cd2978fc224ebc7f601dd63e3098705808f6e4c56d6fadd2" gracePeriod=30 Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.087696 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="proxy-httpd" probeResult="failure" output="Get 
\"http://10.217.0.199:3000/\": EOF" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.093142 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b930-account-create-update-l42mn"] Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.095046 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b930-account-create-update-l42mn" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.103491 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.107761 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b930-account-create-update-l42mn"] Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.133603 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08fa7b7-2bf5-4695-be3c-4a455172a896-operator-scripts\") pod \"nova-api-7e7f-account-create-update-f9glz\" (UID: \"b08fa7b7-2bf5-4695-be3c-4a455172a896\") " pod="openstack/nova-api-7e7f-account-create-update-f9glz" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.133869 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6zpq\" (UniqueName: \"kubernetes.io/projected/b08fa7b7-2bf5-4695-be3c-4a455172a896-kube-api-access-q6zpq\") pod \"nova-api-7e7f-account-create-update-f9glz\" (UID: \"b08fa7b7-2bf5-4695-be3c-4a455172a896\") " pod="openstack/nova-api-7e7f-account-create-update-f9glz" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.136198 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08fa7b7-2bf5-4695-be3c-4a455172a896-operator-scripts\") pod \"nova-api-7e7f-account-create-update-f9glz\" (UID: \"b08fa7b7-2bf5-4695-be3c-4a455172a896\") " pod="openstack/nova-api-7e7f-account-create-update-f9glz" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.136890 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-5lhks" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.160114 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6zpq\" (UniqueName: \"kubernetes.io/projected/b08fa7b7-2bf5-4695-be3c-4a455172a896-kube-api-access-q6zpq\") pod \"nova-api-7e7f-account-create-update-f9glz\" (UID: \"b08fa7b7-2bf5-4695-be3c-4a455172a896\") " pod="openstack/nova-api-7e7f-account-create-update-f9glz" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.218524 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7e7f-account-create-update-f9glz" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.235750 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e828b00-d636-4aee-8eae-5bb405a342d7-operator-scripts\") pod \"nova-cell0-b930-account-create-update-l42mn\" (UID: \"8e828b00-d636-4aee-8eae-5bb405a342d7\") " pod="openstack/nova-cell0-b930-account-create-update-l42mn" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.235997 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b85fb\" (UniqueName: \"kubernetes.io/projected/8e828b00-d636-4aee-8eae-5bb405a342d7-kube-api-access-b85fb\") pod \"nova-cell0-b930-account-create-update-l42mn\" (UID: \"8e828b00-d636-4aee-8eae-5bb405a342d7\") " pod="openstack/nova-cell0-b930-account-create-update-l42mn" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.284726 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-1b78-account-create-update-c757l"] Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.287089 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1b78-account-create-update-c757l" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.287174 4693 generic.go:334] "Generic (PLEG): container finished" podID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerID="4c78635e88ffc50cb76174a91f620a324f579e0c56181a8a4d6d9312a4b81d4b" exitCode=2 Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.287335 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b67354-eb2d-4ade-bc4a-096d9e0b9791","Type":"ContainerDied","Data":"4c78635e88ffc50cb76174a91f620a324f579e0c56181a8a4d6d9312a4b81d4b"} Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.290056 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.312948 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1b78-account-create-update-c757l"] Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.339525 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b85fb\" (UniqueName: \"kubernetes.io/projected/8e828b00-d636-4aee-8eae-5bb405a342d7-kube-api-access-b85fb\") pod \"nova-cell0-b930-account-create-update-l42mn\" (UID: \"8e828b00-d636-4aee-8eae-5bb405a342d7\") " pod="openstack/nova-cell0-b930-account-create-update-l42mn" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.339618 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e828b00-d636-4aee-8eae-5bb405a342d7-operator-scripts\") pod \"nova-cell0-b930-account-create-update-l42mn\" (UID: \"8e828b00-d636-4aee-8eae-5bb405a342d7\") " pod="openstack/nova-cell0-b930-account-create-update-l42mn" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.340451 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e828b00-d636-4aee-8eae-5bb405a342d7-operator-scripts\") pod \"nova-cell0-b930-account-create-update-l42mn\" (UID: \"8e828b00-d636-4aee-8eae-5bb405a342d7\") " pod="openstack/nova-cell0-b930-account-create-update-l42mn" Dec 12 16:13:43 crc 
kubenswrapper[4693]: I1212 16:13:43.379136 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b85fb\" (UniqueName: \"kubernetes.io/projected/8e828b00-d636-4aee-8eae-5bb405a342d7-kube-api-access-b85fb\") pod \"nova-cell0-b930-account-create-update-l42mn\" (UID: \"8e828b00-d636-4aee-8eae-5bb405a342d7\") " pod="openstack/nova-cell0-b930-account-create-update-l42mn" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.423402 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b930-account-create-update-l42mn" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.442489 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b228d\" (UniqueName: \"kubernetes.io/projected/1f3fc160-512e-4b69-8997-696c7b80c676-kube-api-access-b228d\") pod \"nova-cell1-1b78-account-create-update-c757l\" (UID: \"1f3fc160-512e-4b69-8997-696c7b80c676\") " pod="openstack/nova-cell1-1b78-account-create-update-c757l" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.442577 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3fc160-512e-4b69-8997-696c7b80c676-operator-scripts\") pod \"nova-cell1-1b78-account-create-update-c757l\" (UID: \"1f3fc160-512e-4b69-8997-696c7b80c676\") " pod="openstack/nova-cell1-1b78-account-create-update-c757l" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.544479 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3fc160-512e-4b69-8997-696c7b80c676-operator-scripts\") pod \"nova-cell1-1b78-account-create-update-c757l\" (UID: \"1f3fc160-512e-4b69-8997-696c7b80c676\") " pod="openstack/nova-cell1-1b78-account-create-update-c757l" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.545136 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3fc160-512e-4b69-8997-696c7b80c676-operator-scripts\") pod \"nova-cell1-1b78-account-create-update-c757l\" (UID: \"1f3fc160-512e-4b69-8997-696c7b80c676\") " pod="openstack/nova-cell1-1b78-account-create-update-c757l" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.547630 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b228d\" (UniqueName: \"kubernetes.io/projected/1f3fc160-512e-4b69-8997-696c7b80c676-kube-api-access-b228d\") pod \"nova-cell1-1b78-account-create-update-c757l\" (UID: \"1f3fc160-512e-4b69-8997-696c7b80c676\") " pod="openstack/nova-cell1-1b78-account-create-update-c757l" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.570076 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b228d\" (UniqueName: \"kubernetes.io/projected/1f3fc160-512e-4b69-8997-696c7b80c676-kube-api-access-b228d\") pod \"nova-cell1-1b78-account-create-update-c757l\" (UID: \"1f3fc160-512e-4b69-8997-696c7b80c676\") " pod="openstack/nova-cell1-1b78-account-create-update-c757l" Dec 12 16:13:43 crc kubenswrapper[4693]: I1212 16:13:43.620897 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1b78-account-create-update-c757l" Dec 12 16:13:44 crc kubenswrapper[4693]: I1212 16:13:44.347028 4693 generic.go:334] "Generic (PLEG): container finished" podID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerID="4199dbc803e76efad2b01d09b01976cd73c96ef2282fd5d932840a4833b52cff" exitCode=0 Dec 12 16:13:44 crc kubenswrapper[4693]: I1212 16:13:44.347325 4693 generic.go:334] "Generic (PLEG): container finished" podID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerID="3167185dcb78e8afab1e2cc577722bddfcb001ac1bae3df9790b36700676e095" exitCode=0 Dec 12 16:13:44 crc kubenswrapper[4693]: I1212 16:13:44.347344 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b67354-eb2d-4ade-bc4a-096d9e0b9791","Type":"ContainerDied","Data":"4199dbc803e76efad2b01d09b01976cd73c96ef2282fd5d932840a4833b52cff"} Dec 12 16:13:44 crc kubenswrapper[4693]: I1212 16:13:44.347370 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b67354-eb2d-4ade-bc4a-096d9e0b9791","Type":"ContainerDied","Data":"3167185dcb78e8afab1e2cc577722bddfcb001ac1bae3df9790b36700676e095"} Dec 12 16:13:44 crc kubenswrapper[4693]: I1212 16:13:44.826645 4693 scope.go:117] "RemoveContainer" containerID="b4deb5aea30f8ab43818a50102d17bd858460c0087ae21f029423112e36ba764" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.080784 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-75f68d46b9-kwdl9"] Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.083527 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.105899 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-776c75b6d4-cbj4w"] Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.117771 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-combined-ca-bundle\") pod \"heat-api-75f68d46b9-kwdl9\" (UID: \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\") " pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.118023 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-config-data\") pod \"heat-api-75f68d46b9-kwdl9\" (UID: \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\") " pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.118154 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-config-data-custom\") pod \"heat-api-75f68d46b9-kwdl9\" (UID: \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\") " pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.118211 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqdp8\" (UniqueName: \"kubernetes.io/projected/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-kube-api-access-hqdp8\") pod \"heat-api-75f68d46b9-kwdl9\" (UID: \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\") " pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.142339 
4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-75f68d46b9-kwdl9"] Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.142455 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.155781 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-d58f4676c-v22mp"] Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.165211 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.174909 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-776c75b6d4-cbj4w"] Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.217678 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d58f4676c-v22mp"] Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.220494 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjcw9\" (UniqueName: \"kubernetes.io/projected/6c84420b-2157-4931-86de-b1c55c4e7f0c-kube-api-access-kjcw9\") pod \"heat-cfnapi-d58f4676c-v22mp\" (UID: \"6c84420b-2157-4931-86de-b1c55c4e7f0c\") " pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.220588 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-combined-ca-bundle\") pod \"heat-cfnapi-d58f4676c-v22mp\" (UID: \"6c84420b-2157-4931-86de-b1c55c4e7f0c\") " pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.220948 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-combined-ca-bundle\") pod \"heat-engine-776c75b6d4-cbj4w\" (UID: \"77b98312-4447-4e00-b457-c724c0b623d3\") " pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.220985 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-config-data-custom\") pod \"heat-engine-776c75b6d4-cbj4w\" (UID: \"77b98312-4447-4e00-b457-c724c0b623d3\") " pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.221022 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-config-data-custom\") pod \"heat-cfnapi-d58f4676c-v22mp\" (UID: \"6c84420b-2157-4931-86de-b1c55c4e7f0c\") " pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.221075 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-combined-ca-bundle\") pod \"heat-api-75f68d46b9-kwdl9\" (UID: \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\") " pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.221146 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-config-data\") pod \"heat-api-75f68d46b9-kwdl9\" (UID: \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\") " pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.221343 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-config-data-custom\") pod \"heat-api-75f68d46b9-kwdl9\" (UID: \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\") " pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.221365 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-config-data\") pod \"heat-cfnapi-d58f4676c-v22mp\" (UID: \"6c84420b-2157-4931-86de-b1c55c4e7f0c\") " pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.221386 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqdp8\" (UniqueName: \"kubernetes.io/projected/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-kube-api-access-hqdp8\") pod \"heat-api-75f68d46b9-kwdl9\" (UID: \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\") " pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.221408 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-config-data\") pod \"heat-engine-776c75b6d4-cbj4w\" (UID: \"77b98312-4447-4e00-b457-c724c0b623d3\") " pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.221982 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbfft\" (UniqueName: \"kubernetes.io/projected/77b98312-4447-4e00-b457-c724c0b623d3-kube-api-access-cbfft\") pod \"heat-engine-776c75b6d4-cbj4w\" (UID: \"77b98312-4447-4e00-b457-c724c0b623d3\") " pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.232834 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-combined-ca-bundle\") pod \"heat-api-75f68d46b9-kwdl9\" (UID: \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\") " pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.233249 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-config-data\") pod \"heat-api-75f68d46b9-kwdl9\" (UID: \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\") " pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.238195 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-config-data-custom\") pod \"heat-api-75f68d46b9-kwdl9\" (UID: \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\") " pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.254075 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqdp8\" (UniqueName: 
\"kubernetes.io/projected/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-kube-api-access-hqdp8\") pod \"heat-api-75f68d46b9-kwdl9\" (UID: \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\") " pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.444358 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.449041 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-config-data\") pod \"heat-cfnapi-d58f4676c-v22mp\" (UID: \"6c84420b-2157-4931-86de-b1c55c4e7f0c\") " pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.449104 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-config-data\") pod \"heat-engine-776c75b6d4-cbj4w\" (UID: \"77b98312-4447-4e00-b457-c724c0b623d3\") " pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.449129 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbfft\" (UniqueName: \"kubernetes.io/projected/77b98312-4447-4e00-b457-c724c0b623d3-kube-api-access-cbfft\") pod \"heat-engine-776c75b6d4-cbj4w\" (UID: \"77b98312-4447-4e00-b457-c724c0b623d3\") " pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.449174 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjcw9\" (UniqueName: \"kubernetes.io/projected/6c84420b-2157-4931-86de-b1c55c4e7f0c-kube-api-access-kjcw9\") pod \"heat-cfnapi-d58f4676c-v22mp\" (UID: \"6c84420b-2157-4931-86de-b1c55c4e7f0c\") " pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.449240 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-combined-ca-bundle\") pod \"heat-cfnapi-d58f4676c-v22mp\" (UID: \"6c84420b-2157-4931-86de-b1c55c4e7f0c\") " pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.449311 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-combined-ca-bundle\") pod \"heat-engine-776c75b6d4-cbj4w\" (UID: \"77b98312-4447-4e00-b457-c724c0b623d3\") " pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.449352 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-config-data-custom\") pod \"heat-engine-776c75b6d4-cbj4w\" (UID: \"77b98312-4447-4e00-b457-c724c0b623d3\") " pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.449401 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-config-data-custom\") pod \"heat-cfnapi-d58f4676c-v22mp\" (UID: \"6c84420b-2157-4931-86de-b1c55c4e7f0c\") " pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:13:45 crc 
kubenswrapper[4693]: I1212 16:13:45.456685 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-config-data\") pod \"heat-cfnapi-d58f4676c-v22mp\" (UID: \"6c84420b-2157-4931-86de-b1c55c4e7f0c\") " pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.462187 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-config-data\") pod \"heat-engine-776c75b6d4-cbj4w\" (UID: \"77b98312-4447-4e00-b457-c724c0b623d3\") " pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.466667 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-combined-ca-bundle\") pod \"heat-cfnapi-d58f4676c-v22mp\" (UID: \"6c84420b-2157-4931-86de-b1c55c4e7f0c\") " pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.471124 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-config-data-custom\") pod \"heat-cfnapi-d58f4676c-v22mp\" (UID: \"6c84420b-2157-4931-86de-b1c55c4e7f0c\") " pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.471427 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-config-data-custom\") pod \"heat-engine-776c75b6d4-cbj4w\" (UID: \"77b98312-4447-4e00-b457-c724c0b623d3\") " pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.482421 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-combined-ca-bundle\") pod \"heat-engine-776c75b6d4-cbj4w\" (UID: \"77b98312-4447-4e00-b457-c724c0b623d3\") " pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.486240 4693 generic.go:334] "Generic (PLEG): container finished" podID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerID="d60dbc8e9e816078cd2978fc224ebc7f601dd63e3098705808f6e4c56d6fadd2" exitCode=0 Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.486315 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b67354-eb2d-4ade-bc4a-096d9e0b9791","Type":"ContainerDied","Data":"d60dbc8e9e816078cd2978fc224ebc7f601dd63e3098705808f6e4c56d6fadd2"} Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.488318 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbfft\" (UniqueName: \"kubernetes.io/projected/77b98312-4447-4e00-b457-c724c0b623d3-kube-api-access-cbfft\") pod \"heat-engine-776c75b6d4-cbj4w\" (UID: \"77b98312-4447-4e00-b457-c724c0b623d3\") " pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.494989 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjcw9\" (UniqueName: \"kubernetes.io/projected/6c84420b-2157-4931-86de-b1c55c4e7f0c-kube-api-access-kjcw9\") pod \"heat-cfnapi-d58f4676c-v22mp\" (UID: 
\"6c84420b-2157-4931-86de-b1c55c4e7f0c\") " pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.520527 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.526330 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.865776 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.199:3000/\": dial tcp 10.217.0.199:3000: connect: connection refused" Dec 12 16:13:45 crc kubenswrapper[4693]: I1212 16:13:45.870753 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.378099 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-56c75966f4-fvrkb"] Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.382777 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6469c4dff9-vwxp5"] Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.438745 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5f78f47478-pkflg"] Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.441163 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.448196 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.448492 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.462075 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-798ccfcf74-j5zqf"] Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.472419 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.476611 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.477023 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.538499 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6cm6\" (UniqueName: \"kubernetes.io/projected/e03ab816-746e-449a-8b01-e627b71362d3-kube-api-access-h6cm6\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.538730 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-config-data-custom\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.538934 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-combined-ca-bundle\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.539026 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-config-data\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.539070 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-public-tls-certs\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.539849 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-internal-tls-certs\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.555621 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5f78f47478-pkflg"] Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.638190 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-798ccfcf74-j5zqf"] Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.642541 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-combined-ca-bundle\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " 
pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.642595 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddnc4\" (UniqueName: \"kubernetes.io/projected/d39e7363-35e4-4586-a52a-ad6b28e98eb7-kube-api-access-ddnc4\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.642638 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-config-data\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.642664 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-public-tls-certs\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.642697 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-public-tls-certs\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.642725 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-config-data\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.642752 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-internal-tls-certs\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.642778 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-config-data-custom\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.643047 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-internal-tls-certs\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.643084 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6cm6\" (UniqueName: \"kubernetes.io/projected/e03ab816-746e-449a-8b01-e627b71362d3-kube-api-access-h6cm6\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: 
\"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.643131 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-combined-ca-bundle\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.643188 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-config-data-custom\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.734438 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-config-data-custom\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.735726 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-internal-tls-certs\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.735735 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-public-tls-certs\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.737864 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6cm6\" (UniqueName: \"kubernetes.io/projected/e03ab816-746e-449a-8b01-e627b71362d3-kube-api-access-h6cm6\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.739770 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-combined-ca-bundle\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.741313 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-config-data\") pod \"heat-cfnapi-5f78f47478-pkflg\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") " pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.758102 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddnc4\" (UniqueName: \"kubernetes.io/projected/d39e7363-35e4-4586-a52a-ad6b28e98eb7-kube-api-access-ddnc4\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " 
pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.758412 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-public-tls-certs\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.758460 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-config-data\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.758496 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-config-data-custom\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.758661 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-internal-tls-certs\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.758751 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-combined-ca-bundle\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.765095 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-combined-ca-bundle\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.767608 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-config-data\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.780223 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-public-tls-certs\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.782229 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-internal-tls-certs\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.787182 4693 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddnc4\" (UniqueName: \"kubernetes.io/projected/d39e7363-35e4-4586-a52a-ad6b28e98eb7-kube-api-access-ddnc4\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.795394 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-config-data-custom\") pod \"heat-api-798ccfcf74-j5zqf\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") " pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.821997 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:47 crc kubenswrapper[4693]: I1212 16:13:47.866402 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:48 crc kubenswrapper[4693]: I1212 16:13:48.654564 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" Dec 12 16:13:48 crc kubenswrapper[4693]: I1212 16:13:48.804926 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nw2pb"] Dec 12 16:13:48 crc kubenswrapper[4693]: I1212 16:13:48.805422 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" podUID="e73400cc-c2d1-4e09-94b2-38ad2d5a3058" containerName="dnsmasq-dns" containerID="cri-o://dee514335d58c5942ca31bc163d90679b98e6a7c1885fcd4ea02437a78426476" gracePeriod=10 Dec 12 16:13:48 crc kubenswrapper[4693]: I1212 16:13:48.814661 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:48 crc kubenswrapper[4693]: I1212 16:13:48.827172 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6bd4874f5f-5jsgt" Dec 12 16:13:49 crc kubenswrapper[4693]: I1212 16:13:49.624444 4693 generic.go:334] "Generic (PLEG): container finished" podID="e73400cc-c2d1-4e09-94b2-38ad2d5a3058" containerID="dee514335d58c5942ca31bc163d90679b98e6a7c1885fcd4ea02437a78426476" exitCode=0 Dec 12 16:13:49 crc kubenswrapper[4693]: I1212 16:13:49.624542 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" event={"ID":"e73400cc-c2d1-4e09-94b2-38ad2d5a3058","Type":"ContainerDied","Data":"dee514335d58c5942ca31bc163d90679b98e6a7c1885fcd4ea02437a78426476"} Dec 12 16:13:50 crc kubenswrapper[4693]: I1212 16:13:50.383269 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" podUID="e73400cc-c2d1-4e09-94b2-38ad2d5a3058" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.201:5353: connect: connection refused" Dec 12 16:13:51 crc kubenswrapper[4693]: E1212 16:13:51.950419 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Dec 12 16:13:51 crc kubenswrapper[4693]: E1212 16:13:51.951384 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch576h65h96h669h57h77h5b5h58fh64bhcch5cbh66ch67ch75hd4hf6h546hbch9ch5b8h585h555h65dh6ch555hcdh575h667hf6hbdhd9q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbthj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(aa5c116e-ba6c-42ba-b865-b32b51104014): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:13:51 crc kubenswrapper[4693]: E1212 16:13:51.953026 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="aa5c116e-ba6c-42ba-b865-b32b51104014" Dec 12 16:13:52 crc kubenswrapper[4693]: E1212 16:13:52.664726 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified" Dec 12 16:13:52 crc kubenswrapper[4693]: E1212 16:13:52.665475 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-cfnapi,Image:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_httpd_setup && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n57hc7h67fhd8h7dh86hf6h558h6bhc7h676h657h68ch5bbhc5h75h654h697h665h668hb4h64fh5b5hb4h79h78h5b7h644h58dh5b5h568h5dcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:heat-cfnapi-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-custom,ReadOnly:true,MountPath:/etc/heat/heat.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kw6v7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:10,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:10,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-cfnapi-56c75966f4-fvrkb_openstack(317fe3cf-1373-4cfe-9cd9-6d80050d4c3c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:13:52 crc kubenswrapper[4693]: E1212 16:13:52.666794 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-cfnapi-56c75966f4-fvrkb" podUID="317fe3cf-1373-4cfe-9cd9-6d80050d4c3c" Dec 12 16:13:52 crc kubenswrapper[4693]: E1212 16:13:52.687509 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" 
podUID="aa5c116e-ba6c-42ba-b865-b32b51104014" Dec 12 16:13:53 crc kubenswrapper[4693]: E1212 16:13:53.078938 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-api:current-podified" Dec 12 16:13:53 crc kubenswrapper[4693]: E1212 16:13:53.079138 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-api,Image:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_httpd_setup && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc5h57ch54h88h5ddh8h9bhch599h644h5c9hbh76h688h68bh655hd5hdch5b8h5d8h687h55bh9ch559h86h699h67bh547h674h6bh59bh5dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:heat-api-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-custom,ReadOnly:true,MountPath:/etc/heat/heat.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9njzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck,Port:{0 8004 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:10,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck,Port:{0 8004 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:10,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-api-6469c4dff9-vwxp5_openstack(70eecfd3-212d-4a52-9793-811a49b2020c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:13:53 crc kubenswrapper[4693]: E1212 16:13:53.080384 4693 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-api-6469c4dff9-vwxp5" podUID="70eecfd3-212d-4a52-9793-811a49b2020c" Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.703678 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9b67354-eb2d-4ade-bc4a-096d9e0b9791","Type":"ContainerDied","Data":"2e8b9ada4fc38cb9f1f4d40b6dc8248d81ab80efd419cde1f72e0780f3e0650a"} Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.704430 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e8b9ada4fc38cb9f1f4d40b6dc8248d81ab80efd419cde1f72e0780f3e0650a" Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.707550 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" event={"ID":"e73400cc-c2d1-4e09-94b2-38ad2d5a3058","Type":"ContainerDied","Data":"c593edfb17c6c8505024658c98f1ebf93850ca903ea476a2b8085bb6cb19f5e3"} Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.707589 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c593edfb17c6c8505024658c98f1ebf93850ca903ea476a2b8085bb6cb19f5e3" Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.818917 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.879167 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.972089 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-config-data\") pod \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.972162 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-dns-svc\") pod \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.972193 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-scripts\") pod \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.972225 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-run-httpd\") pod \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.972321 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2s6f\" (UniqueName: \"kubernetes.io/projected/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-kube-api-access-b2s6f\") pod \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.972361 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-sg-core-conf-yaml\") pod \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.972420 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-config\") pod \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.972478 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-combined-ca-bundle\") pod \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.972632 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-dns-swift-storage-0\") pod \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.972660 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-ovsdbserver-nb\") pod \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.972688 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-ovsdbserver-sb\") pod \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.972748 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vddx\" (UniqueName: \"kubernetes.io/projected/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-kube-api-access-8vddx\") pod \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\" (UID: \"e73400cc-c2d1-4e09-94b2-38ad2d5a3058\") " Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.972779 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-log-httpd\") pod \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\" (UID: \"b9b67354-eb2d-4ade-bc4a-096d9e0b9791\") " Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.975400 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b9b67354-eb2d-4ade-bc4a-096d9e0b9791" (UID: "b9b67354-eb2d-4ade-bc4a-096d9e0b9791"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.985128 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-scripts" (OuterVolumeSpecName: "scripts") pod "b9b67354-eb2d-4ade-bc4a-096d9e0b9791" (UID: "b9b67354-eb2d-4ade-bc4a-096d9e0b9791"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:53 crc kubenswrapper[4693]: I1212 16:13:53.986421 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b9b67354-eb2d-4ade-bc4a-096d9e0b9791" (UID: "b9b67354-eb2d-4ade-bc4a-096d9e0b9791"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.009907 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-kube-api-access-b2s6f" (OuterVolumeSpecName: "kube-api-access-b2s6f") pod "b9b67354-eb2d-4ade-bc4a-096d9e0b9791" (UID: "b9b67354-eb2d-4ade-bc4a-096d9e0b9791"). InnerVolumeSpecName "kube-api-access-b2s6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.013764 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-kube-api-access-8vddx" (OuterVolumeSpecName: "kube-api-access-8vddx") pod "e73400cc-c2d1-4e09-94b2-38ad2d5a3058" (UID: "e73400cc-c2d1-4e09-94b2-38ad2d5a3058"). InnerVolumeSpecName "kube-api-access-8vddx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.077593 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vddx\" (UniqueName: \"kubernetes.io/projected/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-kube-api-access-8vddx\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.077621 4693 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.077631 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.077640 4693 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.077649 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2s6f\" (UniqueName: \"kubernetes.io/projected/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-kube-api-access-b2s6f\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.138811 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b9b67354-eb2d-4ade-bc4a-096d9e0b9791" (UID: "b9b67354-eb2d-4ade-bc4a-096d9e0b9791"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.148967 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e73400cc-c2d1-4e09-94b2-38ad2d5a3058" (UID: "e73400cc-c2d1-4e09-94b2-38ad2d5a3058"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.166545 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-config" (OuterVolumeSpecName: "config") pod "e73400cc-c2d1-4e09-94b2-38ad2d5a3058" (UID: "e73400cc-c2d1-4e09-94b2-38ad2d5a3058"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.182749 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.182793 4693 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.182853 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.199081 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e73400cc-c2d1-4e09-94b2-38ad2d5a3058" (UID: "e73400cc-c2d1-4e09-94b2-38ad2d5a3058"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.218413 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e73400cc-c2d1-4e09-94b2-38ad2d5a3058" (UID: "e73400cc-c2d1-4e09-94b2-38ad2d5a3058"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.233412 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e73400cc-c2d1-4e09-94b2-38ad2d5a3058" (UID: "e73400cc-c2d1-4e09-94b2-38ad2d5a3058"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.285633 4693 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.285669 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.285681 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e73400cc-c2d1-4e09-94b2-38ad2d5a3058-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.298176 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9b67354-eb2d-4ade-bc4a-096d9e0b9791" (UID: "b9b67354-eb2d-4ade-bc4a-096d9e0b9791"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.312353 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-config-data" (OuterVolumeSpecName: "config-data") pod "b9b67354-eb2d-4ade-bc4a-096d9e0b9791" (UID: "b9b67354-eb2d-4ade-bc4a-096d9e0b9791"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.357389 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:13:54 crc kubenswrapper[4693]: E1212 16:13:54.357985 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.390170 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.390207 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b67354-eb2d-4ade-bc4a-096d9e0b9791-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:54 crc kubenswrapper[4693]: W1212 16:13:54.579988 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod187aaad3_d0f1_4f94_8ee3_05b2df3da7f4.slice/crio-1933af74b47f375aceee4d1488a93f803646b617de123a597b361bd54572a6d0 WatchSource:0}: Error finding container 1933af74b47f375aceee4d1488a93f803646b617de123a597b361bd54572a6d0: Status 404 returned error can't find the container with id 1933af74b47f375aceee4d1488a93f803646b617de123a597b361bd54572a6d0 Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.587534 
4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-798ccfcf74-j5zqf"] Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.601507 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-v7mxb"] Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.730356 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-798ccfcf74-j5zqf" event={"ID":"d39e7363-35e4-4586-a52a-ad6b28e98eb7","Type":"ContainerStarted","Data":"5735a4444d47974dd960a63d8bcf9b121c0c9829cea030232bda49278c4754e9"} Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.732150 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nw2pb" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.734811 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-v7mxb" event={"ID":"187aaad3-d0f1-4f94-8ee3-05b2df3da7f4","Type":"ContainerStarted","Data":"1933af74b47f375aceee4d1488a93f803646b617de123a597b361bd54572a6d0"} Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.735021 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.878745 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nw2pb"] Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.944258 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nw2pb"] Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.978513 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:13:54 crc kubenswrapper[4693]: I1212 16:13:54.993392 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.025914 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:13:55 crc kubenswrapper[4693]: E1212 16:13:55.026719 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="ceilometer-central-agent" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.026782 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="ceilometer-central-agent" Dec 12 16:13:55 crc kubenswrapper[4693]: E1212 16:13:55.026807 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e73400cc-c2d1-4e09-94b2-38ad2d5a3058" containerName="init" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.026815 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73400cc-c2d1-4e09-94b2-38ad2d5a3058" containerName="init" Dec 12 16:13:55 crc kubenswrapper[4693]: E1212 16:13:55.026832 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e73400cc-c2d1-4e09-94b2-38ad2d5a3058" containerName="dnsmasq-dns" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.026839 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73400cc-c2d1-4e09-94b2-38ad2d5a3058" containerName="dnsmasq-dns" Dec 12 16:13:55 crc kubenswrapper[4693]: E1212 16:13:55.026862 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="proxy-httpd" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.026870 4693 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="proxy-httpd" Dec 12 16:13:55 crc kubenswrapper[4693]: E1212 16:13:55.026893 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="sg-core" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.026904 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="sg-core" Dec 12 16:13:55 crc kubenswrapper[4693]: E1212 16:13:55.026929 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="ceilometer-notification-agent" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.026937 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="ceilometer-notification-agent" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.027282 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="ceilometer-notification-agent" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.027304 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="proxy-httpd" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.027335 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="ceilometer-central-agent" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.027351 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="e73400cc-c2d1-4e09-94b2-38ad2d5a3058" containerName="dnsmasq-dns" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.027367 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" containerName="sg-core" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.030954 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.035998 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.037122 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6469c4dff9-vwxp5" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.037308 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.037389 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.072465 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jq8js"] Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.084691 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-56c75966f4-fvrkb" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.100613 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1b78-account-create-update-c757l"] Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.120761 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-75f68d46b9-kwdl9"] Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.222524 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-config-data\") pod \"70eecfd3-212d-4a52-9793-811a49b2020c\" (UID: \"70eecfd3-212d-4a52-9793-811a49b2020c\") " Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.222621 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw6v7\" (UniqueName: \"kubernetes.io/projected/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-kube-api-access-kw6v7\") pod \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\" (UID: \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\") " Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.222819 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-config-data-custom\") pod \"70eecfd3-212d-4a52-9793-811a49b2020c\" (UID: \"70eecfd3-212d-4a52-9793-811a49b2020c\") " Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.222982 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-combined-ca-bundle\") pod \"70eecfd3-212d-4a52-9793-811a49b2020c\" (UID: \"70eecfd3-212d-4a52-9793-811a49b2020c\") " Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.223123 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-config-data-custom\") pod \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\" (UID: \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\") " Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.223156 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-config-data\") pod \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\" (UID: \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\") " Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.223248 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-combined-ca-bundle\") pod \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\" (UID: \"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c\") " Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.223328 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9njzq\" (UniqueName: \"kubernetes.io/projected/70eecfd3-212d-4a52-9793-811a49b2020c-kube-api-access-9njzq\") pod \"70eecfd3-212d-4a52-9793-811a49b2020c\" (UID: \"70eecfd3-212d-4a52-9793-811a49b2020c\") " Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.223861 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-config-data\") pod 
\"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.224046 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.224098 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.224179 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twz98\" (UniqueName: \"kubernetes.io/projected/a506e418-94c0-4a42-8493-6badc9bf052d-kube-api-access-twz98\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.224243 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-scripts\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.224309 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a506e418-94c0-4a42-8493-6badc9bf052d-log-httpd\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.224344 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a506e418-94c0-4a42-8493-6badc9bf052d-run-httpd\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.242952 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-kube-api-access-kw6v7" (OuterVolumeSpecName: "kube-api-access-kw6v7") pod "317fe3cf-1373-4cfe-9cd9-6d80050d4c3c" (UID: "317fe3cf-1373-4cfe-9cd9-6d80050d4c3c"). InnerVolumeSpecName "kube-api-access-kw6v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.256789 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "317fe3cf-1373-4cfe-9cd9-6d80050d4c3c" (UID: "317fe3cf-1373-4cfe-9cd9-6d80050d4c3c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.282687 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70eecfd3-212d-4a52-9793-811a49b2020c-kube-api-access-9njzq" (OuterVolumeSpecName: "kube-api-access-9njzq") pod "70eecfd3-212d-4a52-9793-811a49b2020c" (UID: "70eecfd3-212d-4a52-9793-811a49b2020c"). InnerVolumeSpecName "kube-api-access-9njzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.282826 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "317fe3cf-1373-4cfe-9cd9-6d80050d4c3c" (UID: "317fe3cf-1373-4cfe-9cd9-6d80050d4c3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.283500 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-config-data" (OuterVolumeSpecName: "config-data") pod "317fe3cf-1373-4cfe-9cd9-6d80050d4c3c" (UID: "317fe3cf-1373-4cfe-9cd9-6d80050d4c3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.283504 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "70eecfd3-212d-4a52-9793-811a49b2020c" (UID: "70eecfd3-212d-4a52-9793-811a49b2020c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.283583 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-config-data" (OuterVolumeSpecName: "config-data") pod "70eecfd3-212d-4a52-9793-811a49b2020c" (UID: "70eecfd3-212d-4a52-9793-811a49b2020c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.283628 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70eecfd3-212d-4a52-9793-811a49b2020c" (UID: "70eecfd3-212d-4a52-9793-811a49b2020c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.360806 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twz98\" (UniqueName: \"kubernetes.io/projected/a506e418-94c0-4a42-8493-6badc9bf052d-kube-api-access-twz98\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.360902 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-scripts\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.360973 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a506e418-94c0-4a42-8493-6badc9bf052d-log-httpd\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.361030 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a506e418-94c0-4a42-8493-6badc9bf052d-run-httpd\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.361512 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-config-data\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.361729 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.361767 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.361852 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.361863 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw6v7\" (UniqueName: \"kubernetes.io/projected/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-kube-api-access-kw6v7\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.361874 4693 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.361882 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/70eecfd3-212d-4a52-9793-811a49b2020c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.361890 4693 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.361899 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.361907 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.361918 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9njzq\" (UniqueName: \"kubernetes.io/projected/70eecfd3-212d-4a52-9793-811a49b2020c-kube-api-access-9njzq\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.366595 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.380552 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a506e418-94c0-4a42-8493-6badc9bf052d-run-httpd\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.380802 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a506e418-94c0-4a42-8493-6badc9bf052d-log-httpd\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.397575 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twz98\" (UniqueName: \"kubernetes.io/projected/a506e418-94c0-4a42-8493-6badc9bf052d-kube-api-access-twz98\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.414115 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9b67354-eb2d-4ade-bc4a-096d9e0b9791" path="/var/lib/kubelet/pods/b9b67354-eb2d-4ade-bc4a-096d9e0b9791/volumes" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.419289 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.421960 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-config-data\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 
crc kubenswrapper[4693]: I1212 16:13:55.424997 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e73400cc-c2d1-4e09-94b2-38ad2d5a3058" path="/var/lib/kubelet/pods/e73400cc-c2d1-4e09-94b2-38ad2d5a3058/volumes" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.447068 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-scripts\") pod \"ceilometer-0\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.522611 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5f78f47478-pkflg"] Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.544343 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b930-account-create-update-l42mn"] Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.555217 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d58f4676c-v22mp"] Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.572153 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7e7f-account-create-update-f9glz"] Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.595287 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-776c75b6d4-cbj4w"] Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.673181 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.796227 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-56c75966f4-fvrkb" event={"ID":"317fe3cf-1373-4cfe-9cd9-6d80050d4c3c","Type":"ContainerDied","Data":"e93441e03b7573202f0c472d015949b0232e993353b52c64620144e735be5862"} Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.801791 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-56c75966f4-fvrkb" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.831139 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6469c4dff9-vwxp5" event={"ID":"70eecfd3-212d-4a52-9793-811a49b2020c","Type":"ContainerDied","Data":"dc8657913c473714a48b5df581fb406ca4206d1990ea7857690eb45f3f7c22a3"} Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.831516 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6469c4dff9-vwxp5" Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.861897 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f78f47478-pkflg" event={"ID":"e03ab816-746e-449a-8b01-e627b71362d3","Type":"ContainerStarted","Data":"8ec38c9e200425e9af4cdcabd4d091f5980a1964b375bc1c9d787d3de0c866e1"} Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.865584 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d58f4676c-v22mp" event={"ID":"6c84420b-2157-4931-86de-b1c55c4e7f0c","Type":"ContainerStarted","Data":"335c7a912d6a5960902bcafd9c0cf82131d76041cf69ffc6935e56d41dc1b20f"} Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.868963 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1b78-account-create-update-c757l" event={"ID":"1f3fc160-512e-4b69-8997-696c7b80c676","Type":"ContainerStarted","Data":"cbc23fbe3ed3f6772722aadc659477c42eff3c0608f56eb862b39fce002027c3"} Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.872098 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-v7mxb" event={"ID":"187aaad3-d0f1-4f94-8ee3-05b2df3da7f4","Type":"ContainerStarted","Data":"9583aead689f48f7e4ba54ff58c339653bc811a85cedf5ad72c9571f63d86846"} Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.879857 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jq8js" event={"ID":"dc0d7ecc-b112-46ba-aceb-681e37cb50a3","Type":"ContainerStarted","Data":"5b12979f5c91aeccdd668d7048f643b57760a5c8441038906a196d17f1406158"} Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.896368 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b930-account-create-update-l42mn" event={"ID":"8e828b00-d636-4aee-8eae-5bb405a342d7","Type":"ContainerStarted","Data":"204387d988813675e1c096b6931a06f244461613ea180d69a8c44a8b60512d0f"} Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.906138 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75f68d46b9-kwdl9" event={"ID":"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a","Type":"ContainerStarted","Data":"47998b7876c521ed7b28abd595693e0241e7daf7ad830e39b8e2bdb96e123e8b"} Dec 12 16:13:55 crc kubenswrapper[4693]: I1212 16:13:55.925518 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-jq8js" podStartSLOduration=13.925490719 podStartE2EDuration="13.925490719s" podCreationTimestamp="2025-12-12 16:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:55.906889058 +0000 UTC m=+1663.075528669" watchObservedRunningTime="2025-12-12 16:13:55.925490719 +0000 UTC m=+1663.094130320" Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.023699 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6469c4dff9-vwxp5"] Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.049514 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6469c4dff9-vwxp5"] Dec 12 16:13:56 crc kubenswrapper[4693]: E1212 16:13:56.076919 4693 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70eecfd3_212d_4a52_9793_811a49b2020c.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod317fe3cf_1373_4cfe_9cd9_6d80050d4c3c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod187aaad3_d0f1_4f94_8ee3_05b2df3da7f4.slice/crio-conmon-9583aead689f48f7e4ba54ff58c339653bc811a85cedf5ad72c9571f63d86846.scope\": RecentStats: unable to find data in memory cache]" Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.083067 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-5lhks"] Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.109080 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-56c75966f4-fvrkb"] Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.124810 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-56c75966f4-fvrkb"] Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.519888 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.922359 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a506e418-94c0-4a42-8493-6badc9bf052d","Type":"ContainerStarted","Data":"5a1052972e3cdbc3601b62fd2da8370b3ce896af3e67b919d0537bbc7fc0dd27"} Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.924942 4693 generic.go:334] "Generic (PLEG): container finished" podID="1f3fc160-512e-4b69-8997-696c7b80c676" containerID="5d18b0825ee2619dfc29b537dedc6dcd086a1c509f0e5fe57de67f4502904f9d" exitCode=0 Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.924993 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1b78-account-create-update-c757l" event={"ID":"1f3fc160-512e-4b69-8997-696c7b80c676","Type":"ContainerDied","Data":"5d18b0825ee2619dfc29b537dedc6dcd086a1c509f0e5fe57de67f4502904f9d"} Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.932417 4693 generic.go:334] "Generic (PLEG): container finished" podID="dc0d7ecc-b112-46ba-aceb-681e37cb50a3" containerID="ea68d47a0aed1c22031e57d1d7613edfba0f043dad64ba4505a75d644f0df2cb" exitCode=0 Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.932553 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jq8js" event={"ID":"dc0d7ecc-b112-46ba-aceb-681e37cb50a3","Type":"ContainerDied","Data":"ea68d47a0aed1c22031e57d1d7613edfba0f043dad64ba4505a75d644f0df2cb"} Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.935402 4693 generic.go:334] "Generic (PLEG): container finished" podID="8e828b00-d636-4aee-8eae-5bb405a342d7" containerID="696e8a8d6374118f885fda0c957bb85089f29c644bb9649b18ab2738fee3b640" exitCode=0 Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.935497 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b930-account-create-update-l42mn" event={"ID":"8e828b00-d636-4aee-8eae-5bb405a342d7","Type":"ContainerDied","Data":"696e8a8d6374118f885fda0c957bb85089f29c644bb9649b18ab2738fee3b640"} Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.940186 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7e7f-account-create-update-f9glz" event={"ID":"b08fa7b7-2bf5-4695-be3c-4a455172a896","Type":"ContainerStarted","Data":"7df58e7abad652dfa43b77478227c0fcad50dab989ea9ba8265662987455f1db"} Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.940244 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-7e7f-account-create-update-f9glz" event={"ID":"b08fa7b7-2bf5-4695-be3c-4a455172a896","Type":"ContainerStarted","Data":"a92d8a90c2b8efa8c02b0927794fb5c14de5ab9d64ba36c2a9c70d1d0568bfeb"} Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.945303 4693 generic.go:334] "Generic (PLEG): container finished" podID="187aaad3-d0f1-4f94-8ee3-05b2df3da7f4" containerID="9583aead689f48f7e4ba54ff58c339653bc811a85cedf5ad72c9571f63d86846" exitCode=0 Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.945377 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-v7mxb" event={"ID":"187aaad3-d0f1-4f94-8ee3-05b2df3da7f4","Type":"ContainerDied","Data":"9583aead689f48f7e4ba54ff58c339653bc811a85cedf5ad72c9571f63d86846"} Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.950854 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5lhks" event={"ID":"c9881347-9d47-4c92-93ec-aeee80ff784d","Type":"ContainerStarted","Data":"8bec8dcd017a048656c8f84dcb72cf3eff18e1d7a1fcb216aad90dcf3d1ce5fd"} Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.950895 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5lhks" event={"ID":"c9881347-9d47-4c92-93ec-aeee80ff784d","Type":"ContainerStarted","Data":"6f4962844cbbabaecd15ff29ea610e668a66b4819a29628cad2077cadb990dd5"} Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.959131 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-776c75b6d4-cbj4w" event={"ID":"77b98312-4447-4e00-b457-c724c0b623d3","Type":"ContainerStarted","Data":"62edd93be233903199368d4a2eb49cd9c5da2bd76e170d84c9e0717ee947b966"} Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.959187 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-776c75b6d4-cbj4w" event={"ID":"77b98312-4447-4e00-b457-c724c0b623d3","Type":"ContainerStarted","Data":"041ef6247ea3d0a856ae424e427936ef1cfc16598d8e981f26ca4e5b527434db"} Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.960443 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.971492 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f78f47478-pkflg" event={"ID":"e03ab816-746e-449a-8b01-e627b71362d3","Type":"ContainerStarted","Data":"405cf1f6ed52034b6ce730a74929b3c77d7bd2886c293cb0556bb9380199d768"} Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.972165 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.980923 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d58f4676c-v22mp" event={"ID":"6c84420b-2157-4931-86de-b1c55c4e7f0c","Type":"ContainerStarted","Data":"0c1edc68e88f13f2a48d29c52c18a9304a2795ae34793965bb6c877581e45dfe"} Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.981575 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:13:56 crc kubenswrapper[4693]: I1212 16:13:56.984486 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.008062 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-7e7f-account-create-update-f9glz" podStartSLOduration=15.008042879 podStartE2EDuration="15.008042879s" podCreationTimestamp="2025-12-12 16:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:56.995026928 +0000 UTC m=+1664.163666519" watchObservedRunningTime="2025-12-12 16:13:57.008042879 +0000 UTC m=+1664.176682480" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.087831 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5f78f47478-pkflg" podStartSLOduration=9.417621916 podStartE2EDuration="10.087800494s" podCreationTimestamp="2025-12-12 16:13:47 +0000 UTC" firstStartedPulling="2025-12-12 16:13:55.648558329 +0000 UTC m=+1662.817197930" lastFinishedPulling="2025-12-12 16:13:56.318736907 +0000 UTC m=+1663.487376508" observedRunningTime="2025-12-12 16:13:57.022043855 +0000 UTC m=+1664.190683456" watchObservedRunningTime="2025-12-12 16:13:57.087800494 +0000 UTC m=+1664.256440105" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.103093 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-776c75b6d4-cbj4w" podStartSLOduration=13.103071725 podStartE2EDuration="13.103071725s" podCreationTimestamp="2025-12-12 16:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:57.047311395 +0000 UTC m=+1664.215950996" watchObservedRunningTime="2025-12-12 16:13:57.103071725 +0000 UTC m=+1664.271711326" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.105892 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-5lhks" podStartSLOduration=15.10587912 podStartE2EDuration="15.10587912s" podCreationTimestamp="2025-12-12 16:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:13:57.078900035 +0000 UTC m=+1664.247539646" watchObservedRunningTime="2025-12-12 16:13:57.10587912 +0000 UTC m=+1664.274518731" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.168528 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-75f68d46b9-kwdl9" podStartSLOduration=11.634900171 podStartE2EDuration="13.168503395s" podCreationTimestamp="2025-12-12 16:13:44 +0000 UTC" firstStartedPulling="2025-12-12 16:13:55.051414116 +0000 UTC m=+1662.220053717" lastFinishedPulling="2025-12-12 16:13:56.58501734 +0000 UTC m=+1663.753656941" observedRunningTime="2025-12-12 16:13:57.110639638 +0000 UTC m=+1664.279279249" watchObservedRunningTime="2025-12-12 16:13:57.168503395 +0000 UTC m=+1664.337142986" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.191439 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-d58f4676c-v22mp" podStartSLOduration=11.508978944 podStartE2EDuration="12.191207176s" podCreationTimestamp="2025-12-12 16:13:45 +0000 UTC" firstStartedPulling="2025-12-12 16:13:55.672343199 +0000 UTC m=+1662.840982790" lastFinishedPulling="2025-12-12 16:13:56.354571421 +0000 UTC m=+1663.523211022" observedRunningTime="2025-12-12 16:13:57.130544664 +0000 UTC m=+1664.299184285" watchObservedRunningTime="2025-12-12 16:13:57.191207176 +0000 UTC m=+1664.359846777" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.410670 4693 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="317fe3cf-1373-4cfe-9cd9-6d80050d4c3c" path="/var/lib/kubelet/pods/317fe3cf-1373-4cfe-9cd9-6d80050d4c3c/volumes" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.411447 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70eecfd3-212d-4a52-9793-811a49b2020c" path="/var/lib/kubelet/pods/70eecfd3-212d-4a52-9793-811a49b2020c/volumes" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.614268 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-v7mxb" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.771015 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/187aaad3-d0f1-4f94-8ee3-05b2df3da7f4-operator-scripts\") pod \"187aaad3-d0f1-4f94-8ee3-05b2df3da7f4\" (UID: \"187aaad3-d0f1-4f94-8ee3-05b2df3da7f4\") " Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.771115 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-954dw\" (UniqueName: \"kubernetes.io/projected/187aaad3-d0f1-4f94-8ee3-05b2df3da7f4-kube-api-access-954dw\") pod \"187aaad3-d0f1-4f94-8ee3-05b2df3da7f4\" (UID: \"187aaad3-d0f1-4f94-8ee3-05b2df3da7f4\") " Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.771872 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/187aaad3-d0f1-4f94-8ee3-05b2df3da7f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "187aaad3-d0f1-4f94-8ee3-05b2df3da7f4" (UID: "187aaad3-d0f1-4f94-8ee3-05b2df3da7f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.772203 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/187aaad3-d0f1-4f94-8ee3-05b2df3da7f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.777939 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/187aaad3-d0f1-4f94-8ee3-05b2df3da7f4-kube-api-access-954dw" (OuterVolumeSpecName: "kube-api-access-954dw") pod "187aaad3-d0f1-4f94-8ee3-05b2df3da7f4" (UID: "187aaad3-d0f1-4f94-8ee3-05b2df3da7f4"). InnerVolumeSpecName "kube-api-access-954dw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.874445 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-954dw\" (UniqueName: \"kubernetes.io/projected/187aaad3-d0f1-4f94-8ee3-05b2df3da7f4-kube-api-access-954dw\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.993760 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-798ccfcf74-j5zqf" event={"ID":"d39e7363-35e4-4586-a52a-ad6b28e98eb7","Type":"ContainerStarted","Data":"9d02bd23400e0104e952e43ebcbc1423381106072949b44d581b5a49ac891e7f"} Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.993820 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.996107 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a506e418-94c0-4a42-8493-6badc9bf052d","Type":"ContainerStarted","Data":"e1aeabaf6319e4dfebd7ac659467b091356fd745f71e73487af8bfb3ccb5d02d"} Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.997510 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-v7mxb" event={"ID":"187aaad3-d0f1-4f94-8ee3-05b2df3da7f4","Type":"ContainerDied","Data":"1933af74b47f375aceee4d1488a93f803646b617de123a597b361bd54572a6d0"} Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.997544 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1933af74b47f375aceee4d1488a93f803646b617de123a597b361bd54572a6d0" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.997696 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-v7mxb" Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.998561 4693 generic.go:334] "Generic (PLEG): container finished" podID="c9881347-9d47-4c92-93ec-aeee80ff784d" containerID="8bec8dcd017a048656c8f84dcb72cf3eff18e1d7a1fcb216aad90dcf3d1ce5fd" exitCode=0 Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.998601 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5lhks" event={"ID":"c9881347-9d47-4c92-93ec-aeee80ff784d","Type":"ContainerDied","Data":"8bec8dcd017a048656c8f84dcb72cf3eff18e1d7a1fcb216aad90dcf3d1ce5fd"} Dec 12 16:13:57 crc kubenswrapper[4693]: I1212 16:13:57.999911 4693 generic.go:334] "Generic (PLEG): container finished" podID="b08fa7b7-2bf5-4695-be3c-4a455172a896" containerID="7df58e7abad652dfa43b77478227c0fcad50dab989ea9ba8265662987455f1db" exitCode=0 Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:57.999939 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7e7f-account-create-update-f9glz" event={"ID":"b08fa7b7-2bf5-4695-be3c-4a455172a896","Type":"ContainerDied","Data":"7df58e7abad652dfa43b77478227c0fcad50dab989ea9ba8265662987455f1db"} Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.001338 4693 generic.go:334] "Generic (PLEG): container finished" podID="6c84420b-2157-4931-86de-b1c55c4e7f0c" containerID="0c1edc68e88f13f2a48d29c52c18a9304a2795ae34793965bb6c877581e45dfe" exitCode=1 Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.001418 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d58f4676c-v22mp" event={"ID":"6c84420b-2157-4931-86de-b1c55c4e7f0c","Type":"ContainerDied","Data":"0c1edc68e88f13f2a48d29c52c18a9304a2795ae34793965bb6c877581e45dfe"} 
Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.002303 4693 scope.go:117] "RemoveContainer" containerID="0c1edc68e88f13f2a48d29c52c18a9304a2795ae34793965bb6c877581e45dfe" Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.002547 4693 generic.go:334] "Generic (PLEG): container finished" podID="77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" containerID="4520dfa63a3f419c9e01fcd62fa465f965096d8b6da3db286ff9822d859a16a7" exitCode=1 Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.002847 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75f68d46b9-kwdl9" event={"ID":"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a","Type":"ContainerDied","Data":"4520dfa63a3f419c9e01fcd62fa465f965096d8b6da3db286ff9822d859a16a7"} Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.003231 4693 scope.go:117] "RemoveContainer" containerID="4520dfa63a3f419c9e01fcd62fa465f965096d8b6da3db286ff9822d859a16a7" Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.027563 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-798ccfcf74-j5zqf" podStartSLOduration=9.387026533 podStartE2EDuration="11.027536033s" podCreationTimestamp="2025-12-12 16:13:47 +0000 UTC" firstStartedPulling="2025-12-12 16:13:54.58612643 +0000 UTC m=+1661.754766031" lastFinishedPulling="2025-12-12 16:13:56.22663593 +0000 UTC m=+1663.395275531" observedRunningTime="2025-12-12 16:13:58.015244462 +0000 UTC m=+1665.183884083" watchObservedRunningTime="2025-12-12 16:13:58.027536033 +0000 UTC m=+1665.196175634" Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.535856 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.744557 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jq8js" Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.801463 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0d7ecc-b112-46ba-aceb-681e37cb50a3-operator-scripts\") pod \"dc0d7ecc-b112-46ba-aceb-681e37cb50a3\" (UID: \"dc0d7ecc-b112-46ba-aceb-681e37cb50a3\") " Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.801574 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjcth\" (UniqueName: \"kubernetes.io/projected/dc0d7ecc-b112-46ba-aceb-681e37cb50a3-kube-api-access-xjcth\") pod \"dc0d7ecc-b112-46ba-aceb-681e37cb50a3\" (UID: \"dc0d7ecc-b112-46ba-aceb-681e37cb50a3\") " Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.802136 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc0d7ecc-b112-46ba-aceb-681e37cb50a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc0d7ecc-b112-46ba-aceb-681e37cb50a3" (UID: "dc0d7ecc-b112-46ba-aceb-681e37cb50a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.816446 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc0d7ecc-b112-46ba-aceb-681e37cb50a3-kube-api-access-xjcth" (OuterVolumeSpecName: "kube-api-access-xjcth") pod "dc0d7ecc-b112-46ba-aceb-681e37cb50a3" (UID: "dc0d7ecc-b112-46ba-aceb-681e37cb50a3"). InnerVolumeSpecName "kube-api-access-xjcth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.824767 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1b78-account-create-update-c757l" Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.904518 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b930-account-create-update-l42mn" Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.905581 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3fc160-512e-4b69-8997-696c7b80c676-operator-scripts\") pod \"1f3fc160-512e-4b69-8997-696c7b80c676\" (UID: \"1f3fc160-512e-4b69-8997-696c7b80c676\") " Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.905815 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b228d\" (UniqueName: \"kubernetes.io/projected/1f3fc160-512e-4b69-8997-696c7b80c676-kube-api-access-b228d\") pod \"1f3fc160-512e-4b69-8997-696c7b80c676\" (UID: \"1f3fc160-512e-4b69-8997-696c7b80c676\") " Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.906509 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0d7ecc-b112-46ba-aceb-681e37cb50a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.906532 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjcth\" (UniqueName: \"kubernetes.io/projected/dc0d7ecc-b112-46ba-aceb-681e37cb50a3-kube-api-access-xjcth\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.906729 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f3fc160-512e-4b69-8997-696c7b80c676-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f3fc160-512e-4b69-8997-696c7b80c676" (UID: "1f3fc160-512e-4b69-8997-696c7b80c676"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:58 crc kubenswrapper[4693]: I1212 16:13:58.910620 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3fc160-512e-4b69-8997-696c7b80c676-kube-api-access-b228d" (OuterVolumeSpecName: "kube-api-access-b228d") pod "1f3fc160-512e-4b69-8997-696c7b80c676" (UID: "1f3fc160-512e-4b69-8997-696c7b80c676"). InnerVolumeSpecName "kube-api-access-b228d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.008186 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e828b00-d636-4aee-8eae-5bb405a342d7-operator-scripts\") pod \"8e828b00-d636-4aee-8eae-5bb405a342d7\" (UID: \"8e828b00-d636-4aee-8eae-5bb405a342d7\") " Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.008660 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b85fb\" (UniqueName: \"kubernetes.io/projected/8e828b00-d636-4aee-8eae-5bb405a342d7-kube-api-access-b85fb\") pod \"8e828b00-d636-4aee-8eae-5bb405a342d7\" (UID: \"8e828b00-d636-4aee-8eae-5bb405a342d7\") " Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.008759 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e828b00-d636-4aee-8eae-5bb405a342d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e828b00-d636-4aee-8eae-5bb405a342d7" (UID: "8e828b00-d636-4aee-8eae-5bb405a342d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.010014 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3fc160-512e-4b69-8997-696c7b80c676-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.010038 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e828b00-d636-4aee-8eae-5bb405a342d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.010053 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b228d\" (UniqueName: \"kubernetes.io/projected/1f3fc160-512e-4b69-8997-696c7b80c676-kube-api-access-b228d\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.012223 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e828b00-d636-4aee-8eae-5bb405a342d7-kube-api-access-b85fb" (OuterVolumeSpecName: "kube-api-access-b85fb") pod "8e828b00-d636-4aee-8eae-5bb405a342d7" (UID: "8e828b00-d636-4aee-8eae-5bb405a342d7"). InnerVolumeSpecName "kube-api-access-b85fb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.019210 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75f68d46b9-kwdl9" event={"ID":"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a","Type":"ContainerStarted","Data":"998bb426726973b9aa11bb36c0a273f2e860a00db6800e9bfb59edd73a3302a1"} Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.020067 4693 scope.go:117] "RemoveContainer" containerID="998bb426726973b9aa11bb36c0a273f2e860a00db6800e9bfb59edd73a3302a1" Dec 12 16:13:59 crc kubenswrapper[4693]: E1212 16:13:59.020500 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-75f68d46b9-kwdl9_openstack(77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a)\"" pod="openstack/heat-api-75f68d46b9-kwdl9" podUID="77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.020953 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a506e418-94c0-4a42-8493-6badc9bf052d","Type":"ContainerStarted","Data":"0c5519487ed40fbd0cccf841ed1b7f45365dd6fc2141ce16d9d4a4f5e93319b8"} Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.023644 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1b78-account-create-update-c757l" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.023960 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1b78-account-create-update-c757l" event={"ID":"1f3fc160-512e-4b69-8997-696c7b80c676","Type":"ContainerDied","Data":"cbc23fbe3ed3f6772722aadc659477c42eff3c0608f56eb862b39fce002027c3"} Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.024006 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbc23fbe3ed3f6772722aadc659477c42eff3c0608f56eb862b39fce002027c3" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.028366 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jq8js" event={"ID":"dc0d7ecc-b112-46ba-aceb-681e37cb50a3","Type":"ContainerDied","Data":"5b12979f5c91aeccdd668d7048f643b57760a5c8441038906a196d17f1406158"} Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.028674 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b12979f5c91aeccdd668d7048f643b57760a5c8441038906a196d17f1406158" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.028597 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jq8js" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.030213 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b930-account-create-update-l42mn" event={"ID":"8e828b00-d636-4aee-8eae-5bb405a342d7","Type":"ContainerDied","Data":"204387d988813675e1c096b6931a06f244461613ea180d69a8c44a8b60512d0f"} Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.030237 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="204387d988813675e1c096b6931a06f244461613ea180d69a8c44a8b60512d0f" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.030290 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-b930-account-create-update-l42mn" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.045264 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d58f4676c-v22mp" event={"ID":"6c84420b-2157-4931-86de-b1c55c4e7f0c","Type":"ContainerStarted","Data":"bdcfdb491f018b00a2a3a3a9b65a0fd70049cb3a2183e8991e886bff47c98d18"} Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.045615 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.113645 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b85fb\" (UniqueName: \"kubernetes.io/projected/8e828b00-d636-4aee-8eae-5bb405a342d7-kube-api-access-b85fb\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.624211 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-5lhks" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.630355 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7e7f-account-create-update-f9glz" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.750912 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08fa7b7-2bf5-4695-be3c-4a455172a896-operator-scripts\") pod \"b08fa7b7-2bf5-4695-be3c-4a455172a896\" (UID: \"b08fa7b7-2bf5-4695-be3c-4a455172a896\") " Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.751881 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgj67\" (UniqueName: \"kubernetes.io/projected/c9881347-9d47-4c92-93ec-aeee80ff784d-kube-api-access-fgj67\") pod \"c9881347-9d47-4c92-93ec-aeee80ff784d\" (UID: \"c9881347-9d47-4c92-93ec-aeee80ff784d\") " Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.752035 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9881347-9d47-4c92-93ec-aeee80ff784d-operator-scripts\") pod \"c9881347-9d47-4c92-93ec-aeee80ff784d\" (UID: \"c9881347-9d47-4c92-93ec-aeee80ff784d\") " Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.752258 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6zpq\" (UniqueName: \"kubernetes.io/projected/b08fa7b7-2bf5-4695-be3c-4a455172a896-kube-api-access-q6zpq\") pod \"b08fa7b7-2bf5-4695-be3c-4a455172a896\" (UID: \"b08fa7b7-2bf5-4695-be3c-4a455172a896\") " Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.753512 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b08fa7b7-2bf5-4695-be3c-4a455172a896-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b08fa7b7-2bf5-4695-be3c-4a455172a896" (UID: "b08fa7b7-2bf5-4695-be3c-4a455172a896"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.758857 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9881347-9d47-4c92-93ec-aeee80ff784d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9881347-9d47-4c92-93ec-aeee80ff784d" (UID: "c9881347-9d47-4c92-93ec-aeee80ff784d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.774410 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9881347-9d47-4c92-93ec-aeee80ff784d-kube-api-access-fgj67" (OuterVolumeSpecName: "kube-api-access-fgj67") pod "c9881347-9d47-4c92-93ec-aeee80ff784d" (UID: "c9881347-9d47-4c92-93ec-aeee80ff784d"). InnerVolumeSpecName "kube-api-access-fgj67". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.774567 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b08fa7b7-2bf5-4695-be3c-4a455172a896-kube-api-access-q6zpq" (OuterVolumeSpecName: "kube-api-access-q6zpq") pod "b08fa7b7-2bf5-4695-be3c-4a455172a896" (UID: "b08fa7b7-2bf5-4695-be3c-4a455172a896"). InnerVolumeSpecName "kube-api-access-q6zpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.855416 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08fa7b7-2bf5-4695-be3c-4a455172a896-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.855675 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgj67\" (UniqueName: \"kubernetes.io/projected/c9881347-9d47-4c92-93ec-aeee80ff784d-kube-api-access-fgj67\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.855747 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9881347-9d47-4c92-93ec-aeee80ff784d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:13:59 crc kubenswrapper[4693]: I1212 16:13:59.855814 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6zpq\" (UniqueName: \"kubernetes.io/projected/b08fa7b7-2bf5-4695-be3c-4a455172a896-kube-api-access-q6zpq\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.063585 4693 generic.go:334] "Generic (PLEG): container finished" podID="77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" containerID="998bb426726973b9aa11bb36c0a273f2e860a00db6800e9bfb59edd73a3302a1" exitCode=1 Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.063750 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75f68d46b9-kwdl9" event={"ID":"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a","Type":"ContainerDied","Data":"998bb426726973b9aa11bb36c0a273f2e860a00db6800e9bfb59edd73a3302a1"} Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.064304 4693 scope.go:117] "RemoveContainer" containerID="998bb426726973b9aa11bb36c0a273f2e860a00db6800e9bfb59edd73a3302a1" Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.064645 4693 scope.go:117] "RemoveContainer" containerID="4520dfa63a3f419c9e01fcd62fa465f965096d8b6da3db286ff9822d859a16a7" Dec 12 16:14:00 crc kubenswrapper[4693]: E1212 16:14:00.065723 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-75f68d46b9-kwdl9_openstack(77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a)\"" pod="openstack/heat-api-75f68d46b9-kwdl9" podUID="77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.067338 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-db-create-5lhks" event={"ID":"c9881347-9d47-4c92-93ec-aeee80ff784d","Type":"ContainerDied","Data":"6f4962844cbbabaecd15ff29ea610e668a66b4819a29628cad2077cadb990dd5"} Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.067417 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-5lhks" Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.067431 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f4962844cbbabaecd15ff29ea610e668a66b4819a29628cad2077cadb990dd5" Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.070732 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7e7f-account-create-update-f9glz" event={"ID":"b08fa7b7-2bf5-4695-be3c-4a455172a896","Type":"ContainerDied","Data":"a92d8a90c2b8efa8c02b0927794fb5c14de5ab9d64ba36c2a9c70d1d0568bfeb"} Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.071450 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a92d8a90c2b8efa8c02b0927794fb5c14de5ab9d64ba36c2a9c70d1d0568bfeb" Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.070769 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7e7f-account-create-update-f9glz" Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.072857 4693 generic.go:334] "Generic (PLEG): container finished" podID="6c84420b-2157-4931-86de-b1c55c4e7f0c" containerID="bdcfdb491f018b00a2a3a3a9b65a0fd70049cb3a2183e8991e886bff47c98d18" exitCode=1 Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.072954 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d58f4676c-v22mp" event={"ID":"6c84420b-2157-4931-86de-b1c55c4e7f0c","Type":"ContainerDied","Data":"bdcfdb491f018b00a2a3a3a9b65a0fd70049cb3a2183e8991e886bff47c98d18"} Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.073629 4693 scope.go:117] "RemoveContainer" containerID="bdcfdb491f018b00a2a3a3a9b65a0fd70049cb3a2183e8991e886bff47c98d18" Dec 12 16:14:00 crc kubenswrapper[4693]: E1212 16:14:00.074029 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-d58f4676c-v22mp_openstack(6c84420b-2157-4931-86de-b1c55c4e7f0c)\"" pod="openstack/heat-cfnapi-d58f4676c-v22mp" podUID="6c84420b-2157-4931-86de-b1c55c4e7f0c" Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.161100 4693 scope.go:117] "RemoveContainer" containerID="0c1edc68e88f13f2a48d29c52c18a9304a2795ae34793965bb6c877581e45dfe" Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.445631 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.445681 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:14:00 crc kubenswrapper[4693]: I1212 16:14:00.528010 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:14:01 crc kubenswrapper[4693]: I1212 16:14:01.086628 4693 scope.go:117] "RemoveContainer" containerID="bdcfdb491f018b00a2a3a3a9b65a0fd70049cb3a2183e8991e886bff47c98d18" Dec 12 16:14:01 crc kubenswrapper[4693]: E1212 16:14:01.087082 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-d58f4676c-v22mp_openstack(6c84420b-2157-4931-86de-b1c55c4e7f0c)\"" pod="openstack/heat-cfnapi-d58f4676c-v22mp" podUID="6c84420b-2157-4931-86de-b1c55c4e7f0c" Dec 12 16:14:01 crc kubenswrapper[4693]: I1212 16:14:01.090928 4693 scope.go:117] "RemoveContainer" containerID="998bb426726973b9aa11bb36c0a273f2e860a00db6800e9bfb59edd73a3302a1" Dec 12 16:14:01 crc kubenswrapper[4693]: E1212 16:14:01.091420 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-75f68d46b9-kwdl9_openstack(77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a)\"" pod="openstack/heat-api-75f68d46b9-kwdl9" podUID="77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" Dec 12 16:14:01 crc kubenswrapper[4693]: I1212 16:14:01.094128 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a506e418-94c0-4a42-8493-6badc9bf052d","Type":"ContainerStarted","Data":"4f63f864b3a3fbd459da9a134c1f86dc68339ec379d4956b4a37e76fcfd19922"} Dec 12 16:14:02 crc kubenswrapper[4693]: I1212 16:14:02.112710 4693 scope.go:117] "RemoveContainer" containerID="bdcfdb491f018b00a2a3a3a9b65a0fd70049cb3a2183e8991e886bff47c98d18" Dec 12 16:14:02 crc kubenswrapper[4693]: E1212 16:14:02.113935 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-d58f4676c-v22mp_openstack(6c84420b-2157-4931-86de-b1c55c4e7f0c)\"" pod="openstack/heat-cfnapi-d58f4676c-v22mp" podUID="6c84420b-2157-4931-86de-b1c55c4e7f0c" Dec 12 16:14:02 crc kubenswrapper[4693]: I1212 16:14:02.114845 4693 scope.go:117] "RemoveContainer" containerID="998bb426726973b9aa11bb36c0a273f2e860a00db6800e9bfb59edd73a3302a1" Dec 12 16:14:02 crc kubenswrapper[4693]: E1212 16:14:02.115568 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-75f68d46b9-kwdl9_openstack(77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a)\"" pod="openstack/heat-api-75f68d46b9-kwdl9" podUID="77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" Dec 12 16:14:02 crc kubenswrapper[4693]: I1212 16:14:02.768937 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.402943 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dgtln"] Dec 12 16:14:03 crc kubenswrapper[4693]: E1212 16:14:03.403666 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3fc160-512e-4b69-8997-696c7b80c676" containerName="mariadb-account-create-update" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.403682 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3fc160-512e-4b69-8997-696c7b80c676" containerName="mariadb-account-create-update" Dec 12 16:14:03 crc kubenswrapper[4693]: E1212 16:14:03.403706 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="187aaad3-d0f1-4f94-8ee3-05b2df3da7f4" containerName="mariadb-database-create" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.403713 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="187aaad3-d0f1-4f94-8ee3-05b2df3da7f4" containerName="mariadb-database-create" Dec 12 16:14:03 crc 
kubenswrapper[4693]: E1212 16:14:03.403735 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e828b00-d636-4aee-8eae-5bb405a342d7" containerName="mariadb-account-create-update" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.403740 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e828b00-d636-4aee-8eae-5bb405a342d7" containerName="mariadb-account-create-update" Dec 12 16:14:03 crc kubenswrapper[4693]: E1212 16:14:03.403750 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9881347-9d47-4c92-93ec-aeee80ff784d" containerName="mariadb-database-create" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.403756 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9881347-9d47-4c92-93ec-aeee80ff784d" containerName="mariadb-database-create" Dec 12 16:14:03 crc kubenswrapper[4693]: E1212 16:14:03.403764 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08fa7b7-2bf5-4695-be3c-4a455172a896" containerName="mariadb-account-create-update" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.403769 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08fa7b7-2bf5-4695-be3c-4a455172a896" containerName="mariadb-account-create-update" Dec 12 16:14:03 crc kubenswrapper[4693]: E1212 16:14:03.403795 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc0d7ecc-b112-46ba-aceb-681e37cb50a3" containerName="mariadb-database-create" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.403801 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc0d7ecc-b112-46ba-aceb-681e37cb50a3" containerName="mariadb-database-create" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.404052 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc0d7ecc-b112-46ba-aceb-681e37cb50a3" containerName="mariadb-database-create" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.404076 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9881347-9d47-4c92-93ec-aeee80ff784d" containerName="mariadb-database-create" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.404091 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3fc160-512e-4b69-8997-696c7b80c676" containerName="mariadb-account-create-update" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.404108 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="187aaad3-d0f1-4f94-8ee3-05b2df3da7f4" containerName="mariadb-database-create" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.404118 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e828b00-d636-4aee-8eae-5bb405a342d7" containerName="mariadb-account-create-update" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.404129 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="b08fa7b7-2bf5-4695-be3c-4a455172a896" containerName="mariadb-account-create-update" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.405021 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dgtln" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.413884 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5wxvb" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.414134 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.414331 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.420856 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dgtln"] Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.470987 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-scripts\") pod \"nova-cell0-conductor-db-sync-dgtln\" (UID: \"55ff1574-a937-497e-9e7a-0589ee9732bf\") " pod="openstack/nova-cell0-conductor-db-sync-dgtln" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.471162 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dgtln\" (UID: \"55ff1574-a937-497e-9e7a-0589ee9732bf\") " pod="openstack/nova-cell0-conductor-db-sync-dgtln" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.472850 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-config-data\") pod \"nova-cell0-conductor-db-sync-dgtln\" (UID: \"55ff1574-a937-497e-9e7a-0589ee9732bf\") " pod="openstack/nova-cell0-conductor-db-sync-dgtln" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.473031 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwzc5\" (UniqueName: \"kubernetes.io/projected/55ff1574-a937-497e-9e7a-0589ee9732bf-kube-api-access-nwzc5\") pod \"nova-cell0-conductor-db-sync-dgtln\" (UID: \"55ff1574-a937-497e-9e7a-0589ee9732bf\") " pod="openstack/nova-cell0-conductor-db-sync-dgtln" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.577134 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-scripts\") pod \"nova-cell0-conductor-db-sync-dgtln\" (UID: \"55ff1574-a937-497e-9e7a-0589ee9732bf\") " pod="openstack/nova-cell0-conductor-db-sync-dgtln" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.577197 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dgtln\" (UID: \"55ff1574-a937-497e-9e7a-0589ee9732bf\") " pod="openstack/nova-cell0-conductor-db-sync-dgtln" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.577312 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-config-data\") pod \"nova-cell0-conductor-db-sync-dgtln\" (UID: 
\"55ff1574-a937-497e-9e7a-0589ee9732bf\") " pod="openstack/nova-cell0-conductor-db-sync-dgtln" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.577349 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwzc5\" (UniqueName: \"kubernetes.io/projected/55ff1574-a937-497e-9e7a-0589ee9732bf-kube-api-access-nwzc5\") pod \"nova-cell0-conductor-db-sync-dgtln\" (UID: \"55ff1574-a937-497e-9e7a-0589ee9732bf\") " pod="openstack/nova-cell0-conductor-db-sync-dgtln" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.584799 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-scripts\") pod \"nova-cell0-conductor-db-sync-dgtln\" (UID: \"55ff1574-a937-497e-9e7a-0589ee9732bf\") " pod="openstack/nova-cell0-conductor-db-sync-dgtln" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.585981 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-config-data\") pod \"nova-cell0-conductor-db-sync-dgtln\" (UID: \"55ff1574-a937-497e-9e7a-0589ee9732bf\") " pod="openstack/nova-cell0-conductor-db-sync-dgtln" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.586995 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dgtln\" (UID: \"55ff1574-a937-497e-9e7a-0589ee9732bf\") " pod="openstack/nova-cell0-conductor-db-sync-dgtln" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.603998 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwzc5\" (UniqueName: \"kubernetes.io/projected/55ff1574-a937-497e-9e7a-0589ee9732bf-kube-api-access-nwzc5\") pod \"nova-cell0-conductor-db-sync-dgtln\" (UID: \"55ff1574-a937-497e-9e7a-0589ee9732bf\") " pod="openstack/nova-cell0-conductor-db-sync-dgtln" Dec 12 16:14:03 crc kubenswrapper[4693]: I1212 16:14:03.740855 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dgtln" Dec 12 16:14:04 crc kubenswrapper[4693]: I1212 16:14:04.179864 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a506e418-94c0-4a42-8493-6badc9bf052d","Type":"ContainerStarted","Data":"90806b2bfce572062add00ec74daa65b9d3762e408fe8f4c137c75e68887cf1f"} Dec 12 16:14:04 crc kubenswrapper[4693]: I1212 16:14:04.180397 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" containerName="ceilometer-central-agent" containerID="cri-o://e1aeabaf6319e4dfebd7ac659467b091356fd745f71e73487af8bfb3ccb5d02d" gracePeriod=30 Dec 12 16:14:04 crc kubenswrapper[4693]: I1212 16:14:04.180499 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 12 16:14:04 crc kubenswrapper[4693]: I1212 16:14:04.180919 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" containerName="proxy-httpd" containerID="cri-o://90806b2bfce572062add00ec74daa65b9d3762e408fe8f4c137c75e68887cf1f" gracePeriod=30 Dec 12 16:14:04 crc kubenswrapper[4693]: I1212 16:14:04.180978 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" containerName="sg-core" containerID="cri-o://4f63f864b3a3fbd459da9a134c1f86dc68339ec379d4956b4a37e76fcfd19922" gracePeriod=30 Dec 12 16:14:04 crc kubenswrapper[4693]: I1212 16:14:04.181013 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" containerName="ceilometer-notification-agent" containerID="cri-o://0c5519487ed40fbd0cccf841ed1b7f45365dd6fc2141ce16d9d4a4f5e93319b8" gracePeriod=30 Dec 12 16:14:04 crc kubenswrapper[4693]: I1212 16:14:04.214163 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.8250638869999998 podStartE2EDuration="10.214142531s" podCreationTimestamp="2025-12-12 16:13:54 +0000 UTC" firstStartedPulling="2025-12-12 16:13:56.527346829 +0000 UTC m=+1663.695986430" lastFinishedPulling="2025-12-12 16:14:02.916425473 +0000 UTC m=+1670.085065074" observedRunningTime="2025-12-12 16:14:04.203759141 +0000 UTC m=+1671.372398732" watchObservedRunningTime="2025-12-12 16:14:04.214142531 +0000 UTC m=+1671.382782132" Dec 12 16:14:04 crc kubenswrapper[4693]: I1212 16:14:04.305567 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dgtln"] Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.211854 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dgtln" event={"ID":"55ff1574-a937-497e-9e7a-0589ee9732bf","Type":"ContainerStarted","Data":"c6ea0cc0074847d5c9bb28805757e5d97885e8bfadd88370e9143738d71e88b5"} Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.219735 4693 generic.go:334] "Generic (PLEG): container finished" podID="a506e418-94c0-4a42-8493-6badc9bf052d" containerID="90806b2bfce572062add00ec74daa65b9d3762e408fe8f4c137c75e68887cf1f" exitCode=0 Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.219776 4693 generic.go:334] "Generic (PLEG): container finished" podID="a506e418-94c0-4a42-8493-6badc9bf052d" containerID="4f63f864b3a3fbd459da9a134c1f86dc68339ec379d4956b4a37e76fcfd19922" 
exitCode=2 Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.219786 4693 generic.go:334] "Generic (PLEG): container finished" podID="a506e418-94c0-4a42-8493-6badc9bf052d" containerID="0c5519487ed40fbd0cccf841ed1b7f45365dd6fc2141ce16d9d4a4f5e93319b8" exitCode=0 Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.219795 4693 generic.go:334] "Generic (PLEG): container finished" podID="a506e418-94c0-4a42-8493-6badc9bf052d" containerID="e1aeabaf6319e4dfebd7ac659467b091356fd745f71e73487af8bfb3ccb5d02d" exitCode=0 Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.219820 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a506e418-94c0-4a42-8493-6badc9bf052d","Type":"ContainerDied","Data":"90806b2bfce572062add00ec74daa65b9d3762e408fe8f4c137c75e68887cf1f"} Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.219860 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a506e418-94c0-4a42-8493-6badc9bf052d","Type":"ContainerDied","Data":"4f63f864b3a3fbd459da9a134c1f86dc68339ec379d4956b4a37e76fcfd19922"} Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.219872 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a506e418-94c0-4a42-8493-6badc9bf052d","Type":"ContainerDied","Data":"0c5519487ed40fbd0cccf841ed1b7f45365dd6fc2141ce16d9d4a4f5e93319b8"} Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.219883 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a506e418-94c0-4a42-8493-6badc9bf052d","Type":"ContainerDied","Data":"e1aeabaf6319e4dfebd7ac659467b091356fd745f71e73487af8bfb3ccb5d02d"} Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.653617 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.720692 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-798ccfcf74-j5zqf" Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.727679 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5f78f47478-pkflg" Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.793247 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-scripts\") pod \"a506e418-94c0-4a42-8493-6badc9bf052d\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.793335 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-sg-core-conf-yaml\") pod \"a506e418-94c0-4a42-8493-6badc9bf052d\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.793452 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twz98\" (UniqueName: \"kubernetes.io/projected/a506e418-94c0-4a42-8493-6badc9bf052d-kube-api-access-twz98\") pod \"a506e418-94c0-4a42-8493-6badc9bf052d\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.793474 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-combined-ca-bundle\") pod \"a506e418-94c0-4a42-8493-6badc9bf052d\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.793632 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a506e418-94c0-4a42-8493-6badc9bf052d-log-httpd\") pod \"a506e418-94c0-4a42-8493-6badc9bf052d\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.793727 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a506e418-94c0-4a42-8493-6badc9bf052d-run-httpd\") pod \"a506e418-94c0-4a42-8493-6badc9bf052d\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.793914 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-config-data\") pod \"a506e418-94c0-4a42-8493-6badc9bf052d\" (UID: \"a506e418-94c0-4a42-8493-6badc9bf052d\") " Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.798436 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a506e418-94c0-4a42-8493-6badc9bf052d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a506e418-94c0-4a42-8493-6badc9bf052d" (UID: "a506e418-94c0-4a42-8493-6badc9bf052d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.802990 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a506e418-94c0-4a42-8493-6badc9bf052d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a506e418-94c0-4a42-8493-6badc9bf052d" (UID: "a506e418-94c0-4a42-8493-6badc9bf052d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.809149 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-75f68d46b9-kwdl9"] Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.817644 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a506e418-94c0-4a42-8493-6badc9bf052d-kube-api-access-twz98" (OuterVolumeSpecName: "kube-api-access-twz98") pod "a506e418-94c0-4a42-8493-6badc9bf052d" (UID: "a506e418-94c0-4a42-8493-6badc9bf052d"). InnerVolumeSpecName "kube-api-access-twz98". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.857635 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-scripts" (OuterVolumeSpecName: "scripts") pod "a506e418-94c0-4a42-8493-6badc9bf052d" (UID: "a506e418-94c0-4a42-8493-6badc9bf052d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.859233 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-d58f4676c-v22mp"] Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.891514 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a506e418-94c0-4a42-8493-6badc9bf052d" (UID: "a506e418-94c0-4a42-8493-6badc9bf052d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.898522 4693 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a506e418-94c0-4a42-8493-6badc9bf052d-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.898564 4693 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a506e418-94c0-4a42-8493-6badc9bf052d-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.898574 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.898584 4693 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:05 crc kubenswrapper[4693]: I1212 16:14:05.898596 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twz98\" (UniqueName: \"kubernetes.io/projected/a506e418-94c0-4a42-8493-6badc9bf052d-kube-api-access-twz98\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.061230 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-config-data" (OuterVolumeSpecName: "config-data") pod "a506e418-94c0-4a42-8493-6badc9bf052d" (UID: "a506e418-94c0-4a42-8493-6badc9bf052d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.103695 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a506e418-94c0-4a42-8493-6badc9bf052d" (UID: "a506e418-94c0-4a42-8493-6badc9bf052d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.116372 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.116415 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a506e418-94c0-4a42-8493-6badc9bf052d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.272602 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75f68d46b9-kwdl9" event={"ID":"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a","Type":"ContainerDied","Data":"47998b7876c521ed7b28abd595693e0241e7daf7ad830e39b8e2bdb96e123e8b"} Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.272665 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47998b7876c521ed7b28abd595693e0241e7daf7ad830e39b8e2bdb96e123e8b" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.277564 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"aa5c116e-ba6c-42ba-b865-b32b51104014","Type":"ContainerStarted","Data":"0048c0de7e78883fb9b2c34f829f07bdbe504635be0c9325bf32aa269662a114"} Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.288062 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a506e418-94c0-4a42-8493-6badc9bf052d","Type":"ContainerDied","Data":"5a1052972e3cdbc3601b62fd2da8370b3ce896af3e67b919d0537bbc7fc0dd27"} Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.288134 4693 scope.go:117] "RemoveContainer" containerID="90806b2bfce572062add00ec74daa65b9d3762e408fe8f4c137c75e68887cf1f" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.288370 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.301308 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.850151367 podStartE2EDuration="34.301269054s" podCreationTimestamp="2025-12-12 16:13:32 +0000 UTC" firstStartedPulling="2025-12-12 16:13:33.498039996 +0000 UTC m=+1640.666679597" lastFinishedPulling="2025-12-12 16:14:04.949157683 +0000 UTC m=+1672.117797284" observedRunningTime="2025-12-12 16:14:06.299303962 +0000 UTC m=+1673.467943573" watchObservedRunningTime="2025-12-12 16:14:06.301269054 +0000 UTC m=+1673.469908645" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.357691 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:14:06 crc kubenswrapper[4693]: E1212 16:14:06.357959 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.372497 4693 scope.go:117] "RemoveContainer" containerID="4f63f864b3a3fbd459da9a134c1f86dc68339ec379d4956b4a37e76fcfd19922" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.376657 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.403510 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.422554 4693 scope.go:117] "RemoveContainer" containerID="0c5519487ed40fbd0cccf841ed1b7f45365dd6fc2141ce16d9d4a4f5e93319b8" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.433494 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.460911 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:06 crc kubenswrapper[4693]: E1212 16:14:06.461689 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" containerName="sg-core" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.461711 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" containerName="sg-core" Dec 12 16:14:06 crc kubenswrapper[4693]: E1212 16:14:06.461731 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" containerName="ceilometer-central-agent" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.461738 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" containerName="ceilometer-central-agent" Dec 12 16:14:06 crc kubenswrapper[4693]: E1212 16:14:06.461777 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" containerName="heat-api" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.461782 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" containerName="heat-api" Dec 12 16:14:06 crc kubenswrapper[4693]: E1212 
Dec 12 16:14:06 crc kubenswrapper[4693]: E1212 16:14:06.461796 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" containerName="heat-api"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.461803 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" containerName="heat-api"
Dec 12 16:14:06 crc kubenswrapper[4693]: E1212 16:14:06.461818 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" containerName="ceilometer-notification-agent"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.461824 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" containerName="ceilometer-notification-agent"
Dec 12 16:14:06 crc kubenswrapper[4693]: E1212 16:14:06.461838 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" containerName="proxy-httpd"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.461845 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" containerName="proxy-httpd"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.462067 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" containerName="proxy-httpd"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.462083 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" containerName="ceilometer-central-agent"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.462090 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" containerName="heat-api"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.462109 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" containerName="sg-core"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.462119 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" containerName="ceilometer-notification-agent"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.462137 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" containerName="heat-api"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.466435 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.473938 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.474376 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.475394 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.475743 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-d58f4676c-v22mp"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.498468 4693 scope.go:117] "RemoveContainer" containerID="e1aeabaf6319e4dfebd7ac659467b091356fd745f71e73487af8bfb3ccb5d02d"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.526621 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-config-data\") pod \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\" (UID: \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\") "
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.526806 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqdp8\" (UniqueName: \"kubernetes.io/projected/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-kube-api-access-hqdp8\") pod \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\" (UID: \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\") "
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.526846 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-config-data-custom\") pod \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\" (UID: \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\") "
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.526978 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-combined-ca-bundle\") pod \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\" (UID: \"77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a\") "
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.534983 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" (UID: "77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.536636 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-kube-api-access-hqdp8" (OuterVolumeSpecName: "kube-api-access-hqdp8") pod "77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" (UID: "77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a"). InnerVolumeSpecName "kube-api-access-hqdp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.584526 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" (UID: "77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.624391 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-config-data" (OuterVolumeSpecName: "config-data") pod "77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" (UID: "77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.630770 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-combined-ca-bundle\") pod \"6c84420b-2157-4931-86de-b1c55c4e7f0c\" (UID: \"6c84420b-2157-4931-86de-b1c55c4e7f0c\") "
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.631047 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-config-data-custom\") pod \"6c84420b-2157-4931-86de-b1c55c4e7f0c\" (UID: \"6c84420b-2157-4931-86de-b1c55c4e7f0c\") "
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.631106 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjcw9\" (UniqueName: \"kubernetes.io/projected/6c84420b-2157-4931-86de-b1c55c4e7f0c-kube-api-access-kjcw9\") pod \"6c84420b-2157-4931-86de-b1c55c4e7f0c\" (UID: \"6c84420b-2157-4931-86de-b1c55c4e7f0c\") "
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.631197 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-config-data\") pod \"6c84420b-2157-4931-86de-b1c55c4e7f0c\" (UID: \"6c84420b-2157-4931-86de-b1c55c4e7f0c\") "
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.632206 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-scripts\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.632426 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-config-data\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.632512 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f207ed2-6946-4bd3-aa61-f547ec78f81a-run-httpd\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.632613 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f207ed2-6946-4bd3-aa61-f547ec78f81a-log-httpd\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.632660 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.632731 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.632791 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvmwl\" (UniqueName: \"kubernetes.io/projected/3f207ed2-6946-4bd3-aa61-f547ec78f81a-kube-api-access-wvmwl\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0"
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.633023 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqdp8\" (UniqueName: \"kubernetes.io/projected/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-kube-api-access-hqdp8\") on node \"crc\" DevicePath \"\""
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.633050 4693 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.633065 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.633078 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a-config-data\") on node \"crc\" DevicePath \"\""
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.635577 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c84420b-2157-4931-86de-b1c55c4e7f0c-kube-api-access-kjcw9" (OuterVolumeSpecName: "kube-api-access-kjcw9") pod "6c84420b-2157-4931-86de-b1c55c4e7f0c" (UID: "6c84420b-2157-4931-86de-b1c55c4e7f0c"). InnerVolumeSpecName "kube-api-access-kjcw9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.635930 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6c84420b-2157-4931-86de-b1c55c4e7f0c" (UID: "6c84420b-2157-4931-86de-b1c55c4e7f0c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.681471 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c84420b-2157-4931-86de-b1c55c4e7f0c" (UID: "6c84420b-2157-4931-86de-b1c55c4e7f0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.701582 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-config-data" (OuterVolumeSpecName: "config-data") pod "6c84420b-2157-4931-86de-b1c55c4e7f0c" (UID: "6c84420b-2157-4931-86de-b1c55c4e7f0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.735501 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-scripts\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.735836 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-config-data\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.735970 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f207ed2-6946-4bd3-aa61-f547ec78f81a-run-httpd\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.736129 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f207ed2-6946-4bd3-aa61-f547ec78f81a-log-httpd\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.736287 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.736468 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.736592 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvmwl\" (UniqueName: \"kubernetes.io/projected/3f207ed2-6946-4bd3-aa61-f547ec78f81a-kube-api-access-wvmwl\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.736794 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f207ed2-6946-4bd3-aa61-f547ec78f81a-run-httpd\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.736874 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f207ed2-6946-4bd3-aa61-f547ec78f81a-log-httpd\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.737080 4693 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.737114 4693 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjcw9\" (UniqueName: \"kubernetes.io/projected/6c84420b-2157-4931-86de-b1c55c4e7f0c-kube-api-access-kjcw9\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.737131 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.737148 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c84420b-2157-4931-86de-b1c55c4e7f0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.741049 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.741207 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.742013 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-scripts\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.742033 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-config-data\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.754346 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvmwl\" (UniqueName: \"kubernetes.io/projected/3f207ed2-6946-4bd3-aa61-f547ec78f81a-kube-api-access-wvmwl\") pod \"ceilometer-0\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") " pod="openstack/ceilometer-0" Dec 12 16:14:06 crc kubenswrapper[4693]: I1212 16:14:06.800669 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:14:07 crc kubenswrapper[4693]: I1212 16:14:07.304498 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-75f68d46b9-kwdl9" Dec 12 16:14:07 crc kubenswrapper[4693]: I1212 16:14:07.306473 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d58f4676c-v22mp" event={"ID":"6c84420b-2157-4931-86de-b1c55c4e7f0c","Type":"ContainerDied","Data":"335c7a912d6a5960902bcafd9c0cf82131d76041cf69ffc6935e56d41dc1b20f"} Dec 12 16:14:07 crc kubenswrapper[4693]: I1212 16:14:07.306566 4693 scope.go:117] "RemoveContainer" containerID="bdcfdb491f018b00a2a3a3a9b65a0fd70049cb3a2183e8991e886bff47c98d18" Dec 12 16:14:07 crc kubenswrapper[4693]: I1212 16:14:07.306505 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-d58f4676c-v22mp" Dec 12 16:14:07 crc kubenswrapper[4693]: I1212 16:14:07.348072 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:07 crc kubenswrapper[4693]: I1212 16:14:07.383214 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a506e418-94c0-4a42-8493-6badc9bf052d" path="/var/lib/kubelet/pods/a506e418-94c0-4a42-8493-6badc9bf052d/volumes" Dec 12 16:14:07 crc kubenswrapper[4693]: I1212 16:14:07.454005 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-75f68d46b9-kwdl9"] Dec 12 16:14:07 crc kubenswrapper[4693]: I1212 16:14:07.482166 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-75f68d46b9-kwdl9"] Dec 12 16:14:07 crc kubenswrapper[4693]: I1212 16:14:07.513335 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-d58f4676c-v22mp"] Dec 12 16:14:07 crc kubenswrapper[4693]: I1212 16:14:07.544971 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-d58f4676c-v22mp"] Dec 12 16:14:08 crc kubenswrapper[4693]: I1212 16:14:08.335379 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f207ed2-6946-4bd3-aa61-f547ec78f81a","Type":"ContainerStarted","Data":"73a8420719d2ef0bbc65ed954eb63e325fd2d8ac1f47bf0cdc787440612e2b07"} Dec 12 16:14:09 crc kubenswrapper[4693]: I1212 16:14:09.393463 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c84420b-2157-4931-86de-b1c55c4e7f0c" path="/var/lib/kubelet/pods/6c84420b-2157-4931-86de-b1c55c4e7f0c/volumes" Dec 12 16:14:09 crc kubenswrapper[4693]: I1212 16:14:09.394518 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a" path="/var/lib/kubelet/pods/77a76ee1-ef0d-417d-86a4-4bb9d72ebe2a/volumes" Dec 12 16:14:09 crc kubenswrapper[4693]: I1212 16:14:09.395143 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f207ed2-6946-4bd3-aa61-f547ec78f81a","Type":"ContainerStarted","Data":"f7a9a844841023aaa0e607716c1bb6c1f039392c8daabf0ec5548e3f7cf25026"} Dec 12 16:14:10 crc kubenswrapper[4693]: I1212 16:14:10.379118 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f207ed2-6946-4bd3-aa61-f547ec78f81a","Type":"ContainerStarted","Data":"6c6097b60ae0176314efc64cabfb61803595967272016862522e785fbfe40189"} Dec 12 16:14:11 crc kubenswrapper[4693]: I1212 16:14:11.398481 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f207ed2-6946-4bd3-aa61-f547ec78f81a","Type":"ContainerStarted","Data":"096f15413994aa49ef0d3979dccf8ccac51a01efd32309497000d4752755db49"} Dec 12 16:14:13 crc kubenswrapper[4693]: I1212 16:14:13.087353 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:13 crc kubenswrapper[4693]: I1212 16:14:13.440404 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f207ed2-6946-4bd3-aa61-f547ec78f81a","Type":"ContainerStarted","Data":"48a71ca6173c4e563bad03831cb7071253d9c6f3cce10a258740302d08d214b8"} Dec 12 16:14:13 crc kubenswrapper[4693]: I1212 16:14:13.441052 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 12 16:14:13 crc kubenswrapper[4693]: I1212 16:14:13.473745 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.298545931 podStartE2EDuration="7.473721121s" podCreationTimestamp="2025-12-12 16:14:06 +0000 UTC" firstStartedPulling="2025-12-12 16:14:07.383963709 +0000 UTC m=+1674.552603310" lastFinishedPulling="2025-12-12 16:14:12.559138899 +0000 UTC m=+1679.727778500" observedRunningTime="2025-12-12 16:14:13.463852636 +0000 UTC m=+1680.632492237" watchObservedRunningTime="2025-12-12 16:14:13.473721121 +0000 UTC m=+1680.642360722" Dec 12 16:14:14 crc kubenswrapper[4693]: I1212 16:14:14.454123 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerName="ceilometer-central-agent" containerID="cri-o://f7a9a844841023aaa0e607716c1bb6c1f039392c8daabf0ec5548e3f7cf25026" gracePeriod=30 Dec 12 16:14:14 crc kubenswrapper[4693]: I1212 16:14:14.455061 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerName="proxy-httpd" containerID="cri-o://48a71ca6173c4e563bad03831cb7071253d9c6f3cce10a258740302d08d214b8" gracePeriod=30 Dec 12 16:14:14 crc kubenswrapper[4693]: I1212 16:14:14.455097 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerName="sg-core" containerID="cri-o://096f15413994aa49ef0d3979dccf8ccac51a01efd32309497000d4752755db49" gracePeriod=30 Dec 12 16:14:14 crc kubenswrapper[4693]: I1212 16:14:14.455109 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerName="ceilometer-notification-agent" containerID="cri-o://6c6097b60ae0176314efc64cabfb61803595967272016862522e785fbfe40189" gracePeriod=30 Dec 12 16:14:15 crc kubenswrapper[4693]: I1212 16:14:15.511101 4693 generic.go:334] "Generic (PLEG): container finished" podID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerID="48a71ca6173c4e563bad03831cb7071253d9c6f3cce10a258740302d08d214b8" exitCode=0 Dec 12 16:14:15 crc kubenswrapper[4693]: I1212 16:14:15.511542 4693 generic.go:334] "Generic (PLEG): container finished" podID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerID="096f15413994aa49ef0d3979dccf8ccac51a01efd32309497000d4752755db49" exitCode=2 Dec 12 16:14:15 crc kubenswrapper[4693]: I1212 16:14:15.511553 4693 generic.go:334] "Generic (PLEG): container finished" podID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerID="6c6097b60ae0176314efc64cabfb61803595967272016862522e785fbfe40189" exitCode=0 Dec 12 16:14:15 crc kubenswrapper[4693]: I1212 16:14:15.511579 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f207ed2-6946-4bd3-aa61-f547ec78f81a","Type":"ContainerDied","Data":"48a71ca6173c4e563bad03831cb7071253d9c6f3cce10a258740302d08d214b8"} Dec 12 16:14:15 crc kubenswrapper[4693]: I1212 16:14:15.511613 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f207ed2-6946-4bd3-aa61-f547ec78f81a","Type":"ContainerDied","Data":"096f15413994aa49ef0d3979dccf8ccac51a01efd32309497000d4752755db49"} Dec 12 16:14:15 crc kubenswrapper[4693]: I1212 16:14:15.511624 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f207ed2-6946-4bd3-aa61-f547ec78f81a","Type":"ContainerDied","Data":"6c6097b60ae0176314efc64cabfb61803595967272016862522e785fbfe40189"} Dec 12 16:14:15 crc 
Dec 12 16:14:15 crc kubenswrapper[4693]: I1212 16:14:15.561653 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-776c75b6d4-cbj4w"
Dec 12 16:14:15 crc kubenswrapper[4693]: I1212 16:14:15.648730 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-85fcc456ff-kvgm6"]
Dec 12 16:14:15 crc kubenswrapper[4693]: I1212 16:14:15.649026 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-85fcc456ff-kvgm6" podUID="173f6b35-611d-436f-839c-64b2bee96977" containerName="heat-engine" containerID="cri-o://83a5da81c826852251080b7b175edc2d7b3d42fb08198e52ea4c4220ccec4cda" gracePeriod=60
Dec 12 16:14:18 crc kubenswrapper[4693]: E1212 16:14:18.470628 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83a5da81c826852251080b7b175edc2d7b3d42fb08198e52ea4c4220ccec4cda" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Dec 12 16:14:18 crc kubenswrapper[4693]: E1212 16:14:18.472773 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83a5da81c826852251080b7b175edc2d7b3d42fb08198e52ea4c4220ccec4cda" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Dec 12 16:14:18 crc kubenswrapper[4693]: E1212 16:14:18.474468 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83a5da81c826852251080b7b175edc2d7b3d42fb08198e52ea4c4220ccec4cda" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Dec 12 16:14:18 crc kubenswrapper[4693]: E1212 16:14:18.474511 4693 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-85fcc456ff-kvgm6" podUID="173f6b35-611d-436f-839c-64b2bee96977" containerName="heat-engine"
Dec 12 16:14:20 crc kubenswrapper[4693]: I1212 16:14:20.357899 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67"
Dec 12 16:14:20 crc kubenswrapper[4693]: E1212 16:14:20.358535 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d"
Dec 12 16:14:21 crc kubenswrapper[4693]: I1212 16:14:21.657526 4693 generic.go:334] "Generic (PLEG): container finished" podID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerID="f7a9a844841023aaa0e607716c1bb6c1f039392c8daabf0ec5548e3f7cf25026" exitCode=0
Dec 12 16:14:21 crc kubenswrapper[4693]: I1212 16:14:21.657610 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f207ed2-6946-4bd3-aa61-f547ec78f81a","Type":"ContainerDied","Data":"f7a9a844841023aaa0e607716c1bb6c1f039392c8daabf0ec5548e3f7cf25026"}
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.795352 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.880429 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-scripts\") pod \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") "
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.880830 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f207ed2-6946-4bd3-aa61-f547ec78f81a-log-httpd\") pod \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") "
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.881075 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f207ed2-6946-4bd3-aa61-f547ec78f81a-run-httpd\") pod \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") "
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.881222 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-config-data\") pod \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") "
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.881406 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-combined-ca-bundle\") pod \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") "
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.881495 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f207ed2-6946-4bd3-aa61-f547ec78f81a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3f207ed2-6946-4bd3-aa61-f547ec78f81a" (UID: "3f207ed2-6946-4bd3-aa61-f547ec78f81a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.881667 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-sg-core-conf-yaml\") pod \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") "
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.881923 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvmwl\" (UniqueName: \"kubernetes.io/projected/3f207ed2-6946-4bd3-aa61-f547ec78f81a-kube-api-access-wvmwl\") pod \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\" (UID: \"3f207ed2-6946-4bd3-aa61-f547ec78f81a\") "
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.882608 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f207ed2-6946-4bd3-aa61-f547ec78f81a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3f207ed2-6946-4bd3-aa61-f547ec78f81a" (UID: "3f207ed2-6946-4bd3-aa61-f547ec78f81a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.886503 4693 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f207ed2-6946-4bd3-aa61-f547ec78f81a-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.886660 4693 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f207ed2-6946-4bd3-aa61-f547ec78f81a-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.889829 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-scripts" (OuterVolumeSpecName: "scripts") pod "3f207ed2-6946-4bd3-aa61-f547ec78f81a" (UID: "3f207ed2-6946-4bd3-aa61-f547ec78f81a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.894243 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f207ed2-6946-4bd3-aa61-f547ec78f81a-kube-api-access-wvmwl" (OuterVolumeSpecName: "kube-api-access-wvmwl") pod "3f207ed2-6946-4bd3-aa61-f547ec78f81a" (UID: "3f207ed2-6946-4bd3-aa61-f547ec78f81a"). InnerVolumeSpecName "kube-api-access-wvmwl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.924434 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3f207ed2-6946-4bd3-aa61-f547ec78f81a" (UID: "3f207ed2-6946-4bd3-aa61-f547ec78f81a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.991355 4693 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.991399 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvmwl\" (UniqueName: \"kubernetes.io/projected/3f207ed2-6946-4bd3-aa61-f547ec78f81a-kube-api-access-wvmwl\") on node \"crc\" DevicePath \"\""
Dec 12 16:14:22 crc kubenswrapper[4693]: I1212 16:14:22.991430 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-scripts\") on node \"crc\" DevicePath \"\""
Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.004600 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f207ed2-6946-4bd3-aa61-f547ec78f81a" (UID: "3f207ed2-6946-4bd3-aa61-f547ec78f81a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.055965 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-config-data" (OuterVolumeSpecName: "config-data") pod "3f207ed2-6946-4bd3-aa61-f547ec78f81a" (UID: "3f207ed2-6946-4bd3-aa61-f547ec78f81a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.095313 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-config-data\") on node \"crc\" DevicePath \"\""
Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.095369 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f207ed2-6946-4bd3-aa61-f547ec78f81a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.518806 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-85fcc456ff-kvgm6"
Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.624837 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-combined-ca-bundle\") pod \"173f6b35-611d-436f-839c-64b2bee96977\" (UID: \"173f6b35-611d-436f-839c-64b2bee96977\") "
Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.625363 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tblwq\" (UniqueName: \"kubernetes.io/projected/173f6b35-611d-436f-839c-64b2bee96977-kube-api-access-tblwq\") pod \"173f6b35-611d-436f-839c-64b2bee96977\" (UID: \"173f6b35-611d-436f-839c-64b2bee96977\") "
Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.625432 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-config-data-custom\") pod \"173f6b35-611d-436f-839c-64b2bee96977\" (UID: \"173f6b35-611d-436f-839c-64b2bee96977\") "
Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.625504 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-config-data\") pod \"173f6b35-611d-436f-839c-64b2bee96977\" (UID: \"173f6b35-611d-436f-839c-64b2bee96977\") "
Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.632160 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/173f6b35-611d-436f-839c-64b2bee96977-kube-api-access-tblwq" (OuterVolumeSpecName: "kube-api-access-tblwq") pod "173f6b35-611d-436f-839c-64b2bee96977" (UID: "173f6b35-611d-436f-839c-64b2bee96977"). InnerVolumeSpecName "kube-api-access-tblwq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.632824 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "173f6b35-611d-436f-839c-64b2bee96977" (UID: "173f6b35-611d-436f-839c-64b2bee96977"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.660088 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "173f6b35-611d-436f-839c-64b2bee96977" (UID: "173f6b35-611d-436f-839c-64b2bee96977"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.691584 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f207ed2-6946-4bd3-aa61-f547ec78f81a","Type":"ContainerDied","Data":"73a8420719d2ef0bbc65ed954eb63e325fd2d8ac1f47bf0cdc787440612e2b07"} Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.692584 4693 scope.go:117] "RemoveContainer" containerID="48a71ca6173c4e563bad03831cb7071253d9c6f3cce10a258740302d08d214b8" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.691835 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.695221 4693 generic.go:334] "Generic (PLEG): container finished" podID="173f6b35-611d-436f-839c-64b2bee96977" containerID="83a5da81c826852251080b7b175edc2d7b3d42fb08198e52ea4c4220ccec4cda" exitCode=0 Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.695305 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-85fcc456ff-kvgm6" event={"ID":"173f6b35-611d-436f-839c-64b2bee96977","Type":"ContainerDied","Data":"83a5da81c826852251080b7b175edc2d7b3d42fb08198e52ea4c4220ccec4cda"} Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.695339 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-85fcc456ff-kvgm6" event={"ID":"173f6b35-611d-436f-839c-64b2bee96977","Type":"ContainerDied","Data":"dbf9f7361d7476c540dae6f45530f3431c97b288891922aa885e128b93a95085"} Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.695457 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-85fcc456ff-kvgm6" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.697818 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dgtln" event={"ID":"55ff1574-a937-497e-9e7a-0589ee9732bf","Type":"ContainerStarted","Data":"2f895418ec16da8034a1cb9c42a0c3824a1c993b70c6b7cef24a018c83809510"} Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.712617 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-config-data" (OuterVolumeSpecName: "config-data") pod "173f6b35-611d-436f-839c-64b2bee96977" (UID: "173f6b35-611d-436f-839c-64b2bee96977"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.728250 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-dgtln" podStartSLOduration=2.635278446 podStartE2EDuration="20.728221435s" podCreationTimestamp="2025-12-12 16:14:03 +0000 UTC" firstStartedPulling="2025-12-12 16:14:04.319586187 +0000 UTC m=+1671.488225788" lastFinishedPulling="2025-12-12 16:14:22.412529176 +0000 UTC m=+1689.581168777" observedRunningTime="2025-12-12 16:14:23.719504169 +0000 UTC m=+1690.888143770" watchObservedRunningTime="2025-12-12 16:14:23.728221435 +0000 UTC m=+1690.896861036" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.728619 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.728664 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tblwq\" (UniqueName: \"kubernetes.io/projected/173f6b35-611d-436f-839c-64b2bee96977-kube-api-access-tblwq\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.728679 4693 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.728690 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173f6b35-611d-436f-839c-64b2bee96977-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.833926 4693 scope.go:117] "RemoveContainer" containerID="096f15413994aa49ef0d3979dccf8ccac51a01efd32309497000d4752755db49" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.850009 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.864128 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.878639 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:23 crc kubenswrapper[4693]: E1212 16:14:23.879498 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c84420b-2157-4931-86de-b1c55c4e7f0c" containerName="heat-cfnapi" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.879530 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c84420b-2157-4931-86de-b1c55c4e7f0c" containerName="heat-cfnapi" Dec 12 16:14:23 crc kubenswrapper[4693]: E1212 16:14:23.879550 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerName="sg-core" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.879562 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerName="sg-core" Dec 12 16:14:23 crc kubenswrapper[4693]: E1212 16:14:23.879578 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerName="ceilometer-central-agent" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.879586 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" 
containerName="ceilometer-central-agent" Dec 12 16:14:23 crc kubenswrapper[4693]: E1212 16:14:23.888780 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="173f6b35-611d-436f-839c-64b2bee96977" containerName="heat-engine" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.888813 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="173f6b35-611d-436f-839c-64b2bee96977" containerName="heat-engine" Dec 12 16:14:23 crc kubenswrapper[4693]: E1212 16:14:23.888846 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerName="proxy-httpd" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.888856 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerName="proxy-httpd" Dec 12 16:14:23 crc kubenswrapper[4693]: E1212 16:14:23.888970 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerName="ceilometer-notification-agent" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.888986 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerName="ceilometer-notification-agent" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.889980 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c84420b-2157-4931-86de-b1c55c4e7f0c" containerName="heat-cfnapi" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.890022 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="173f6b35-611d-436f-839c-64b2bee96977" containerName="heat-engine" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.890042 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c84420b-2157-4931-86de-b1c55c4e7f0c" containerName="heat-cfnapi" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.890060 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerName="sg-core" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.890073 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerName="ceilometer-notification-agent" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.890186 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerName="ceilometer-central-agent" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.890212 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" containerName="proxy-httpd" Dec 12 16:14:23 crc kubenswrapper[4693]: E1212 16:14:23.890577 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c84420b-2157-4931-86de-b1c55c4e7f0c" containerName="heat-cfnapi" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.890598 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c84420b-2157-4931-86de-b1c55c4e7f0c" containerName="heat-cfnapi" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.893293 4693 scope.go:117] "RemoveContainer" containerID="6c6097b60ae0176314efc64cabfb61803595967272016862522e785fbfe40189" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.893366 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.893311 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.900195 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.901324 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.958336 4693 scope.go:117] "RemoveContainer" containerID="f7a9a844841023aaa0e607716c1bb6c1f039392c8daabf0ec5548e3f7cf25026" Dec 12 16:14:23 crc kubenswrapper[4693]: I1212 16:14:23.987738 4693 scope.go:117] "RemoveContainer" containerID="83a5da81c826852251080b7b175edc2d7b3d42fb08198e52ea4c4220ccec4cda" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.044127 4693 scope.go:117] "RemoveContainer" containerID="83a5da81c826852251080b7b175edc2d7b3d42fb08198e52ea4c4220ccec4cda" Dec 12 16:14:24 crc kubenswrapper[4693]: E1212 16:14:24.047685 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83a5da81c826852251080b7b175edc2d7b3d42fb08198e52ea4c4220ccec4cda\": container with ID starting with 83a5da81c826852251080b7b175edc2d7b3d42fb08198e52ea4c4220ccec4cda not found: ID does not exist" containerID="83a5da81c826852251080b7b175edc2d7b3d42fb08198e52ea4c4220ccec4cda" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.047749 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a5da81c826852251080b7b175edc2d7b3d42fb08198e52ea4c4220ccec4cda"} err="failed to get container status \"83a5da81c826852251080b7b175edc2d7b3d42fb08198e52ea4c4220ccec4cda\": rpc error: code = NotFound desc = could not find container \"83a5da81c826852251080b7b175edc2d7b3d42fb08198e52ea4c4220ccec4cda\": container with ID starting with 83a5da81c826852251080b7b175edc2d7b3d42fb08198e52ea4c4220ccec4cda not found: ID does not exist" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.047946 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-log-httpd\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.048037 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-run-httpd\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.048198 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.048245 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.048345 4693 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-scripts\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.048392 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49nc6\" (UniqueName: \"kubernetes.io/projected/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-kube-api-access-49nc6\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.048442 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-config-data\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.060513 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-85fcc456ff-kvgm6"] Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.079529 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-85fcc456ff-kvgm6"] Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.151940 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49nc6\" (UniqueName: \"kubernetes.io/projected/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-kube-api-access-49nc6\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.152034 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-config-data\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.152120 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-log-httpd\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.152172 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-run-httpd\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.152213 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.152240 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: 
I1212 16:14:24.152314 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-scripts\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.153048 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-log-httpd\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.154387 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-run-httpd\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.158251 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-scripts\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.159263 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.161008 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-config-data\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.161813 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.170380 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49nc6\" (UniqueName: \"kubernetes.io/projected/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-kube-api-access-49nc6\") pod \"ceilometer-0\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.221376 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:14:24 crc kubenswrapper[4693]: I1212 16:14:24.863338 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:25 crc kubenswrapper[4693]: I1212 16:14:25.370986 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="173f6b35-611d-436f-839c-64b2bee96977" path="/var/lib/kubelet/pods/173f6b35-611d-436f-839c-64b2bee96977/volumes" Dec 12 16:14:25 crc kubenswrapper[4693]: I1212 16:14:25.371640 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f207ed2-6946-4bd3-aa61-f547ec78f81a" path="/var/lib/kubelet/pods/3f207ed2-6946-4bd3-aa61-f547ec78f81a/volumes" Dec 12 16:14:25 crc kubenswrapper[4693]: I1212 16:14:25.766175 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a88e6cb-9df7-4a63-b5ba-15626ce5da34","Type":"ContainerStarted","Data":"36fcc3694b535bc476413f3a3b3f37ab44d8e50bb8174dbaa46ad13fb66f69a8"} Dec 12 16:14:27 crc kubenswrapper[4693]: I1212 16:14:27.797264 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a88e6cb-9df7-4a63-b5ba-15626ce5da34","Type":"ContainerStarted","Data":"45c79f7d9f1fb15fc31ae2bf45815d3d4f68c5f7cacab5c522307bc02812bebb"} Dec 12 16:14:28 crc kubenswrapper[4693]: I1212 16:14:28.808831 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a88e6cb-9df7-4a63-b5ba-15626ce5da34","Type":"ContainerStarted","Data":"ff8558d3c3230763aca363b2fcf034e5931dc486704ec2d1f06a4658364a2316"} Dec 12 16:14:29 crc kubenswrapper[4693]: I1212 16:14:29.823971 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a88e6cb-9df7-4a63-b5ba-15626ce5da34","Type":"ContainerStarted","Data":"29a316a3d2ec1b0b8f727dc20a68dc28d606e66a37af5188c0ab2a7de6fdfa8e"} Dec 12 16:14:31 crc kubenswrapper[4693]: I1212 16:14:31.359378 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:14:31 crc kubenswrapper[4693]: E1212 16:14:31.360118 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:14:31 crc kubenswrapper[4693]: I1212 16:14:31.850520 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a88e6cb-9df7-4a63-b5ba-15626ce5da34","Type":"ContainerStarted","Data":"772bad77cf960e5fedb9a3aa2dfde9537abedc50eb3d70aa6d33033cf032934b"} Dec 12 16:14:31 crc kubenswrapper[4693]: I1212 16:14:31.850879 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 12 16:14:31 crc kubenswrapper[4693]: I1212 16:14:31.874736 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.86676942 podStartE2EDuration="8.874722199s" podCreationTimestamp="2025-12-12 16:14:23 +0000 UTC" firstStartedPulling="2025-12-12 16:14:24.868685017 +0000 UTC m=+1692.037324618" lastFinishedPulling="2025-12-12 16:14:30.876637796 +0000 UTC m=+1698.045277397" observedRunningTime="2025-12-12 16:14:31.872962591 +0000 UTC m=+1699.041602192" 
watchObservedRunningTime="2025-12-12 16:14:31.874722199 +0000 UTC m=+1699.043361790" Dec 12 16:14:35 crc kubenswrapper[4693]: I1212 16:14:35.421590 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 16:14:35 crc kubenswrapper[4693]: I1212 16:14:35.422366 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7cdcade9-a317-47ba-a03c-0c355c06e306" containerName="glance-log" containerID="cri-o://204379f4d155ff5433acf100874cb455bd1186ef6cfeb539280d401f0ff54414" gracePeriod=30 Dec 12 16:14:35 crc kubenswrapper[4693]: I1212 16:14:35.422456 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7cdcade9-a317-47ba-a03c-0c355c06e306" containerName="glance-httpd" containerID="cri-o://02bd07d9e3f2b641563c77bf1d94bc6c73e2168b2c22db3470765a72db2401b8" gracePeriod=30 Dec 12 16:14:35 crc kubenswrapper[4693]: I1212 16:14:35.906209 4693 generic.go:334] "Generic (PLEG): container finished" podID="7cdcade9-a317-47ba-a03c-0c355c06e306" containerID="204379f4d155ff5433acf100874cb455bd1186ef6cfeb539280d401f0ff54414" exitCode=143 Dec 12 16:14:35 crc kubenswrapper[4693]: I1212 16:14:35.906497 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cdcade9-a317-47ba-a03c-0c355c06e306","Type":"ContainerDied","Data":"204379f4d155ff5433acf100874cb455bd1186ef6cfeb539280d401f0ff54414"} Dec 12 16:14:36 crc kubenswrapper[4693]: I1212 16:14:36.524482 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 16:14:36 crc kubenswrapper[4693]: I1212 16:14:36.524714 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ad919c71-0957-489d-8ae6-a69c33ab65b5" containerName="glance-log" containerID="cri-o://28cff692fc6b5e1d81fb23e9cf6c98205139fbbf7d10adf5042d2792df004e4b" gracePeriod=30 Dec 12 16:14:36 crc kubenswrapper[4693]: I1212 16:14:36.525210 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ad919c71-0957-489d-8ae6-a69c33ab65b5" containerName="glance-httpd" containerID="cri-o://43597875dcb20f9421d48d134653503b2bd790db4ff608b14535d56ad43af9ea" gracePeriod=30 Dec 12 16:14:36 crc kubenswrapper[4693]: I1212 16:14:36.919969 4693 generic.go:334] "Generic (PLEG): container finished" podID="55ff1574-a937-497e-9e7a-0589ee9732bf" containerID="2f895418ec16da8034a1cb9c42a0c3824a1c993b70c6b7cef24a018c83809510" exitCode=0 Dec 12 16:14:36 crc kubenswrapper[4693]: I1212 16:14:36.920054 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dgtln" event={"ID":"55ff1574-a937-497e-9e7a-0589ee9732bf","Type":"ContainerDied","Data":"2f895418ec16da8034a1cb9c42a0c3824a1c993b70c6b7cef24a018c83809510"} Dec 12 16:14:36 crc kubenswrapper[4693]: I1212 16:14:36.923208 4693 generic.go:334] "Generic (PLEG): container finished" podID="ad919c71-0957-489d-8ae6-a69c33ab65b5" containerID="28cff692fc6b5e1d81fb23e9cf6c98205139fbbf7d10adf5042d2792df004e4b" exitCode=143 Dec 12 16:14:36 crc kubenswrapper[4693]: I1212 16:14:36.923258 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"ad919c71-0957-489d-8ae6-a69c33ab65b5","Type":"ContainerDied","Data":"28cff692fc6b5e1d81fb23e9cf6c98205139fbbf7d10adf5042d2792df004e4b"} Dec 12 16:14:37 crc kubenswrapper[4693]: I1212 16:14:37.842659 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:37 crc kubenswrapper[4693]: I1212 16:14:37.843297 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerName="ceilometer-central-agent" containerID="cri-o://45c79f7d9f1fb15fc31ae2bf45815d3d4f68c5f7cacab5c522307bc02812bebb" gracePeriod=30 Dec 12 16:14:37 crc kubenswrapper[4693]: I1212 16:14:37.843444 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerName="sg-core" containerID="cri-o://29a316a3d2ec1b0b8f727dc20a68dc28d606e66a37af5188c0ab2a7de6fdfa8e" gracePeriod=30 Dec 12 16:14:37 crc kubenswrapper[4693]: I1212 16:14:37.843419 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerName="proxy-httpd" containerID="cri-o://772bad77cf960e5fedb9a3aa2dfde9537abedc50eb3d70aa6d33033cf032934b" gracePeriod=30 Dec 12 16:14:37 crc kubenswrapper[4693]: I1212 16:14:37.843579 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerName="ceilometer-notification-agent" containerID="cri-o://ff8558d3c3230763aca363b2fcf034e5931dc486704ec2d1f06a4658364a2316" gracePeriod=30 Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.447752 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dgtln" Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.474940 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-scripts\") pod \"55ff1574-a937-497e-9e7a-0589ee9732bf\" (UID: \"55ff1574-a937-497e-9e7a-0589ee9732bf\") " Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.476148 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwzc5\" (UniqueName: \"kubernetes.io/projected/55ff1574-a937-497e-9e7a-0589ee9732bf-kube-api-access-nwzc5\") pod \"55ff1574-a937-497e-9e7a-0589ee9732bf\" (UID: \"55ff1574-a937-497e-9e7a-0589ee9732bf\") " Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.476229 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-config-data\") pod \"55ff1574-a937-497e-9e7a-0589ee9732bf\" (UID: \"55ff1574-a937-497e-9e7a-0589ee9732bf\") " Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.476423 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-combined-ca-bundle\") pod \"55ff1574-a937-497e-9e7a-0589ee9732bf\" (UID: \"55ff1574-a937-497e-9e7a-0589ee9732bf\") " Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.490064 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-scripts" (OuterVolumeSpecName: "scripts") pod "55ff1574-a937-497e-9e7a-0589ee9732bf" (UID: "55ff1574-a937-497e-9e7a-0589ee9732bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.497800 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55ff1574-a937-497e-9e7a-0589ee9732bf-kube-api-access-nwzc5" (OuterVolumeSpecName: "kube-api-access-nwzc5") pod "55ff1574-a937-497e-9e7a-0589ee9732bf" (UID: "55ff1574-a937-497e-9e7a-0589ee9732bf"). InnerVolumeSpecName "kube-api-access-nwzc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.510974 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55ff1574-a937-497e-9e7a-0589ee9732bf" (UID: "55ff1574-a937-497e-9e7a-0589ee9732bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.512441 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-config-data" (OuterVolumeSpecName: "config-data") pod "55ff1574-a937-497e-9e7a-0589ee9732bf" (UID: "55ff1574-a937-497e-9e7a-0589ee9732bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.581327 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.581369 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwzc5\" (UniqueName: \"kubernetes.io/projected/55ff1574-a937-497e-9e7a-0589ee9732bf-kube-api-access-nwzc5\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.581386 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.581404 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55ff1574-a937-497e-9e7a-0589ee9732bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.947673 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dgtln" Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.947687 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dgtln" event={"ID":"55ff1574-a937-497e-9e7a-0589ee9732bf","Type":"ContainerDied","Data":"c6ea0cc0074847d5c9bb28805757e5d97885e8bfadd88370e9143738d71e88b5"} Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.948068 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6ea0cc0074847d5c9bb28805757e5d97885e8bfadd88370e9143738d71e88b5" Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.950901 4693 generic.go:334] "Generic (PLEG): container finished" podID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerID="772bad77cf960e5fedb9a3aa2dfde9537abedc50eb3d70aa6d33033cf032934b" exitCode=0 Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.950937 4693 generic.go:334] "Generic (PLEG): container finished" podID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerID="29a316a3d2ec1b0b8f727dc20a68dc28d606e66a37af5188c0ab2a7de6fdfa8e" exitCode=2 Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.950948 4693 generic.go:334] "Generic (PLEG): container finished" podID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerID="ff8558d3c3230763aca363b2fcf034e5931dc486704ec2d1f06a4658364a2316" exitCode=0 Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.950957 4693 generic.go:334] "Generic (PLEG): container finished" podID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerID="45c79f7d9f1fb15fc31ae2bf45815d3d4f68c5f7cacab5c522307bc02812bebb" exitCode=0 Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.951002 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a88e6cb-9df7-4a63-b5ba-15626ce5da34","Type":"ContainerDied","Data":"772bad77cf960e5fedb9a3aa2dfde9537abedc50eb3d70aa6d33033cf032934b"} Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.951030 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a88e6cb-9df7-4a63-b5ba-15626ce5da34","Type":"ContainerDied","Data":"29a316a3d2ec1b0b8f727dc20a68dc28d606e66a37af5188c0ab2a7de6fdfa8e"} Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.951041 4693 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a88e6cb-9df7-4a63-b5ba-15626ce5da34","Type":"ContainerDied","Data":"ff8558d3c3230763aca363b2fcf034e5931dc486704ec2d1f06a4658364a2316"} Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.951052 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a88e6cb-9df7-4a63-b5ba-15626ce5da34","Type":"ContainerDied","Data":"45c79f7d9f1fb15fc31ae2bf45815d3d4f68c5f7cacab5c522307bc02812bebb"} Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.956369 4693 generic.go:334] "Generic (PLEG): container finished" podID="7cdcade9-a317-47ba-a03c-0c355c06e306" containerID="02bd07d9e3f2b641563c77bf1d94bc6c73e2168b2c22db3470765a72db2401b8" exitCode=0 Dec 12 16:14:38 crc kubenswrapper[4693]: I1212 16:14:38.956411 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cdcade9-a317-47ba-a03c-0c355c06e306","Type":"ContainerDied","Data":"02bd07d9e3f2b641563c77bf1d94bc6c73e2168b2c22db3470765a72db2401b8"} Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.178822 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.198233 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-sg-core-conf-yaml\") pod \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.198344 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-combined-ca-bundle\") pod \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.198432 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-config-data\") pod \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.198515 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-run-httpd\") pod \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.198560 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49nc6\" (UniqueName: \"kubernetes.io/projected/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-kube-api-access-49nc6\") pod \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.198744 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-log-httpd\") pod \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.198789 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-scripts\") pod \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\" (UID: \"0a88e6cb-9df7-4a63-b5ba-15626ce5da34\") " Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.208685 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0a88e6cb-9df7-4a63-b5ba-15626ce5da34" (UID: "0a88e6cb-9df7-4a63-b5ba-15626ce5da34"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.208946 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0a88e6cb-9df7-4a63-b5ba-15626ce5da34" (UID: "0a88e6cb-9df7-4a63-b5ba-15626ce5da34"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.261500 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-scripts" (OuterVolumeSpecName: "scripts") pod "0a88e6cb-9df7-4a63-b5ba-15626ce5da34" (UID: "0a88e6cb-9df7-4a63-b5ba-15626ce5da34"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.262484 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-kube-api-access-49nc6" (OuterVolumeSpecName: "kube-api-access-49nc6") pod "0a88e6cb-9df7-4a63-b5ba-15626ce5da34" (UID: "0a88e6cb-9df7-4a63-b5ba-15626ce5da34"). InnerVolumeSpecName "kube-api-access-49nc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.306951 4693 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.306993 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.307005 4693 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.307014 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49nc6\" (UniqueName: \"kubernetes.io/projected/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-kube-api-access-49nc6\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.324700 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 12 16:14:39 crc kubenswrapper[4693]: E1212 16:14:39.325301 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerName="ceilometer-notification-agent" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.325316 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerName="ceilometer-notification-agent" Dec 12 16:14:39 crc kubenswrapper[4693]: E1212 16:14:39.325331 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerName="ceilometer-central-agent" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.325339 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerName="ceilometer-central-agent" Dec 12 16:14:39 crc kubenswrapper[4693]: E1212 16:14:39.325375 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerName="proxy-httpd" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.325381 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerName="proxy-httpd" Dec 12 16:14:39 crc kubenswrapper[4693]: E1212 16:14:39.325398 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerName="sg-core" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.325404 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerName="sg-core" Dec 12 16:14:39 crc kubenswrapper[4693]: E1212 16:14:39.325447 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55ff1574-a937-497e-9e7a-0589ee9732bf" containerName="nova-cell0-conductor-db-sync" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.325453 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ff1574-a937-497e-9e7a-0589ee9732bf" containerName="nova-cell0-conductor-db-sync" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.325690 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="55ff1574-a937-497e-9e7a-0589ee9732bf" containerName="nova-cell0-conductor-db-sync" Dec 12 16:14:39 crc kubenswrapper[4693]: 
I1212 16:14:39.325709 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerName="sg-core" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.325722 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerName="ceilometer-central-agent" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.325730 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerName="proxy-httpd" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.325761 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" containerName="ceilometer-notification-agent" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.326624 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.337003 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5wxvb" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.337597 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.379456 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0a88e6cb-9df7-4a63-b5ba-15626ce5da34" (UID: "0a88e6cb-9df7-4a63-b5ba-15626ce5da34"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.409039 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw85r\" (UniqueName: \"kubernetes.io/projected/27f05709-09cb-455a-a0af-abfb40ebeb04-kube-api-access-kw85r\") pod \"nova-cell0-conductor-0\" (UID: \"27f05709-09cb-455a-a0af-abfb40ebeb04\") " pod="openstack/nova-cell0-conductor-0" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.409137 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f05709-09cb-455a-a0af-abfb40ebeb04-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"27f05709-09cb-455a-a0af-abfb40ebeb04\") " pod="openstack/nova-cell0-conductor-0" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.409192 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f05709-09cb-455a-a0af-abfb40ebeb04-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"27f05709-09cb-455a-a0af-abfb40ebeb04\") " pod="openstack/nova-cell0-conductor-0" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.409374 4693 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.412250 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.449734 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.467510 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a88e6cb-9df7-4a63-b5ba-15626ce5da34" (UID: "0a88e6cb-9df7-4a63-b5ba-15626ce5da34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.511490 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw85r\" (UniqueName: \"kubernetes.io/projected/27f05709-09cb-455a-a0af-abfb40ebeb04-kube-api-access-kw85r\") pod \"nova-cell0-conductor-0\" (UID: \"27f05709-09cb-455a-a0af-abfb40ebeb04\") " pod="openstack/nova-cell0-conductor-0" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.511563 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f05709-09cb-455a-a0af-abfb40ebeb04-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"27f05709-09cb-455a-a0af-abfb40ebeb04\") " pod="openstack/nova-cell0-conductor-0" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.511623 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f05709-09cb-455a-a0af-abfb40ebeb04-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"27f05709-09cb-455a-a0af-abfb40ebeb04\") " pod="openstack/nova-cell0-conductor-0" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.511711 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.518351 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f05709-09cb-455a-a0af-abfb40ebeb04-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"27f05709-09cb-455a-a0af-abfb40ebeb04\") " pod="openstack/nova-cell0-conductor-0" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.518853 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f05709-09cb-455a-a0af-abfb40ebeb04-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"27f05709-09cb-455a-a0af-abfb40ebeb04\") " pod="openstack/nova-cell0-conductor-0" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.538967 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw85r\" (UniqueName: \"kubernetes.io/projected/27f05709-09cb-455a-a0af-abfb40ebeb04-kube-api-access-kw85r\") pod \"nova-cell0-conductor-0\" (UID: \"27f05709-09cb-455a-a0af-abfb40ebeb04\") " pod="openstack/nova-cell0-conductor-0" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.540225 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-config-data" (OuterVolumeSpecName: "config-data") pod "0a88e6cb-9df7-4a63-b5ba-15626ce5da34" (UID: "0a88e6cb-9df7-4a63-b5ba-15626ce5da34"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.612999 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cdcade9-a317-47ba-a03c-0c355c06e306-httpd-run\") pod \"7cdcade9-a317-47ba-a03c-0c355c06e306\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.613058 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-public-tls-certs\") pod \"7cdcade9-a317-47ba-a03c-0c355c06e306\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.613101 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cdcade9-a317-47ba-a03c-0c355c06e306-logs\") pod \"7cdcade9-a317-47ba-a03c-0c355c06e306\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.613726 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cdcade9-a317-47ba-a03c-0c355c06e306-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7cdcade9-a317-47ba-a03c-0c355c06e306" (UID: "7cdcade9-a317-47ba-a03c-0c355c06e306"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.613804 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cdcade9-a317-47ba-a03c-0c355c06e306-logs" (OuterVolumeSpecName: "logs") pod "7cdcade9-a317-47ba-a03c-0c355c06e306" (UID: "7cdcade9-a317-47ba-a03c-0c355c06e306"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.613883 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") pod \"7cdcade9-a317-47ba-a03c-0c355c06e306\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.614145 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-config-data\") pod \"7cdcade9-a317-47ba-a03c-0c355c06e306\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.614207 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-scripts\") pod \"7cdcade9-a317-47ba-a03c-0c355c06e306\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.614298 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lncvz\" (UniqueName: \"kubernetes.io/projected/7cdcade9-a317-47ba-a03c-0c355c06e306-kube-api-access-lncvz\") pod \"7cdcade9-a317-47ba-a03c-0c355c06e306\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.614356 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-combined-ca-bundle\") pod \"7cdcade9-a317-47ba-a03c-0c355c06e306\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.615076 4693 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cdcade9-a317-47ba-a03c-0c355c06e306-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.615098 4693 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cdcade9-a317-47ba-a03c-0c355c06e306-logs\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.615110 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a88e6cb-9df7-4a63-b5ba-15626ce5da34-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.617905 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cdcade9-a317-47ba-a03c-0c355c06e306-kube-api-access-lncvz" (OuterVolumeSpecName: "kube-api-access-lncvz") pod "7cdcade9-a317-47ba-a03c-0c355c06e306" (UID: "7cdcade9-a317-47ba-a03c-0c355c06e306"). InnerVolumeSpecName "kube-api-access-lncvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.625059 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-scripts" (OuterVolumeSpecName: "scripts") pod "7cdcade9-a317-47ba-a03c-0c355c06e306" (UID: "7cdcade9-a317-47ba-a03c-0c355c06e306"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.642069 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d" (OuterVolumeSpecName: "glance") pod "7cdcade9-a317-47ba-a03c-0c355c06e306" (UID: "7cdcade9-a317-47ba-a03c-0c355c06e306"). InnerVolumeSpecName "pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.659156 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cdcade9-a317-47ba-a03c-0c355c06e306" (UID: "7cdcade9-a317-47ba-a03c-0c355c06e306"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:39 crc kubenswrapper[4693]: E1212 16:14:39.676740 4693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-public-tls-certs podName:7cdcade9-a317-47ba-a03c-0c355c06e306 nodeName:}" failed. No retries permitted until 2025-12-12 16:14:40.176704917 +0000 UTC m=+1707.345344518 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "public-tls-certs" (UniqueName: "kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-public-tls-certs") pod "7cdcade9-a317-47ba-a03c-0c355c06e306" (UID: "7cdcade9-a317-47ba-a03c-0c355c06e306") : error deleting /var/lib/kubelet/pods/7cdcade9-a317-47ba-a03c-0c355c06e306/volume-subpaths: remove /var/lib/kubelet/pods/7cdcade9-a317-47ba-a03c-0c355c06e306/volume-subpaths: no such file or directory Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.679523 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-config-data" (OuterVolumeSpecName: "config-data") pod "7cdcade9-a317-47ba-a03c-0c355c06e306" (UID: "7cdcade9-a317-47ba-a03c-0c355c06e306"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.717375 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.717428 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.717441 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lncvz\" (UniqueName: \"kubernetes.io/projected/7cdcade9-a317-47ba-a03c-0c355c06e306-kube-api-access-lncvz\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.717456 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.717501 4693 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") on node \"crc\" " Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.726152 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.745525 4693 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.745693 4693 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d") on node "crc" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.820123 4693 reconciler_common.go:293] "Volume detached for volume \"pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.975912 4693 generic.go:334] "Generic (PLEG): container finished" podID="ad919c71-0957-489d-8ae6-a69c33ab65b5" containerID="43597875dcb20f9421d48d134653503b2bd790db4ff608b14535d56ad43af9ea" exitCode=0 Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.976011 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ad919c71-0957-489d-8ae6-a69c33ab65b5","Type":"ContainerDied","Data":"43597875dcb20f9421d48d134653503b2bd790db4ff608b14535d56ad43af9ea"} Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.982371 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cdcade9-a317-47ba-a03c-0c355c06e306","Type":"ContainerDied","Data":"7d53f444a4aa3c963d36827bd360b2e4dba9e9cdc56e80f695f2c5f884bb092a"} Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.982424 4693 scope.go:117] "RemoveContainer" containerID="02bd07d9e3f2b641563c77bf1d94bc6c73e2168b2c22db3470765a72db2401b8" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.982578 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.990368 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a88e6cb-9df7-4a63-b5ba-15626ce5da34","Type":"ContainerDied","Data":"36fcc3694b535bc476413f3a3b3f37ab44d8e50bb8174dbaa46ad13fb66f69a8"} Dec 12 16:14:39 crc kubenswrapper[4693]: I1212 16:14:39.990489 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.082368 4693 scope.go:117] "RemoveContainer" containerID="204379f4d155ff5433acf100874cb455bd1186ef6cfeb539280d401f0ff54414" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.105701 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.138513 4693 scope.go:117] "RemoveContainer" containerID="772bad77cf960e5fedb9a3aa2dfde9537abedc50eb3d70aa6d33033cf032934b" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.161350 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.185357 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:40 crc kubenswrapper[4693]: E1212 16:14:40.185944 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdcade9-a317-47ba-a03c-0c355c06e306" containerName="glance-httpd" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.185962 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdcade9-a317-47ba-a03c-0c355c06e306" containerName="glance-httpd" Dec 12 16:14:40 crc kubenswrapper[4693]: E1212 16:14:40.186013 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdcade9-a317-47ba-a03c-0c355c06e306" containerName="glance-log" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.186020 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdcade9-a317-47ba-a03c-0c355c06e306" containerName="glance-log" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.187221 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cdcade9-a317-47ba-a03c-0c355c06e306" containerName="glance-log" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.187258 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cdcade9-a317-47ba-a03c-0c355c06e306" containerName="glance-httpd" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.190735 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.194745 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.195018 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.203702 4693 scope.go:117] "RemoveContainer" containerID="29a316a3d2ec1b0b8f727dc20a68dc28d606e66a37af5188c0ab2a7de6fdfa8e" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.205950 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.236251 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-public-tls-certs\") pod \"7cdcade9-a317-47ba-a03c-0c355c06e306\" (UID: \"7cdcade9-a317-47ba-a03c-0c355c06e306\") " Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.236730 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f145842b-33ac-4d05-b482-706a9cedb695-run-httpd\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.236770 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-config-data\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.236870 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46mrl\" (UniqueName: \"kubernetes.io/projected/f145842b-33ac-4d05-b482-706a9cedb695-kube-api-access-46mrl\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.236935 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-scripts\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.236959 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f145842b-33ac-4d05-b482-706a9cedb695-log-httpd\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.236983 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.237034 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.237893 4693 scope.go:117] "RemoveContainer" containerID="ff8558d3c3230763aca363b2fcf034e5931dc486704ec2d1f06a4658364a2316" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.269310 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.287868 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7cdcade9-a317-47ba-a03c-0c355c06e306" (UID: "7cdcade9-a317-47ba-a03c-0c355c06e306"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.337801 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-scripts\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.337835 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f145842b-33ac-4d05-b482-706a9cedb695-log-httpd\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.337855 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.337891 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.337975 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f145842b-33ac-4d05-b482-706a9cedb695-run-httpd\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.337991 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-config-data\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.338055 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46mrl\" (UniqueName: \"kubernetes.io/projected/f145842b-33ac-4d05-b482-706a9cedb695-kube-api-access-46mrl\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.338125 4693 reconciler_common.go:293] "Volume detached for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cdcade9-a317-47ba-a03c-0c355c06e306-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.338679 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f145842b-33ac-4d05-b482-706a9cedb695-run-httpd\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.340501 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f145842b-33ac-4d05-b482-706a9cedb695-log-httpd\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.341440 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.342770 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-config-data\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.343566 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-scripts\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.344091 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.357287 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46mrl\" (UniqueName: \"kubernetes.io/projected/f145842b-33ac-4d05-b482-706a9cedb695-kube-api-access-46mrl\") pod \"ceilometer-0\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.434007 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.439167 4693 scope.go:117] "RemoveContainer" containerID="45c79f7d9f1fb15fc31ae2bf45815d3d4f68c5f7cacab5c522307bc02812bebb" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.440727 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzdnw\" (UniqueName: \"kubernetes.io/projected/ad919c71-0957-489d-8ae6-a69c33ab65b5-kube-api-access-rzdnw\") pod \"ad919c71-0957-489d-8ae6-a69c33ab65b5\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.440781 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-internal-tls-certs\") pod \"ad919c71-0957-489d-8ae6-a69c33ab65b5\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.440812 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad919c71-0957-489d-8ae6-a69c33ab65b5-httpd-run\") pod \"ad919c71-0957-489d-8ae6-a69c33ab65b5\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.441691 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad919c71-0957-489d-8ae6-a69c33ab65b5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ad919c71-0957-489d-8ae6-a69c33ab65b5" (UID: "ad919c71-0957-489d-8ae6-a69c33ab65b5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.442001 4693 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad919c71-0957-489d-8ae6-a69c33ab65b5-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.471560 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad919c71-0957-489d-8ae6-a69c33ab65b5-kube-api-access-rzdnw" (OuterVolumeSpecName: "kube-api-access-rzdnw") pod "ad919c71-0957-489d-8ae6-a69c33ab65b5" (UID: "ad919c71-0957-489d-8ae6-a69c33ab65b5"). InnerVolumeSpecName "kube-api-access-rzdnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.529054 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.543347 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-config-data\") pod \"ad919c71-0957-489d-8ae6-a69c33ab65b5\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.543439 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad919c71-0957-489d-8ae6-a69c33ab65b5-logs\") pod \"ad919c71-0957-489d-8ae6-a69c33ab65b5\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.548673 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad919c71-0957-489d-8ae6-a69c33ab65b5-logs" (OuterVolumeSpecName: "logs") pod "ad919c71-0957-489d-8ae6-a69c33ab65b5" (UID: "ad919c71-0957-489d-8ae6-a69c33ab65b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.593404 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ad919c71-0957-489d-8ae6-a69c33ab65b5" (UID: "ad919c71-0957-489d-8ae6-a69c33ab65b5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.644406 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.655369 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") pod \"ad919c71-0957-489d-8ae6-a69c33ab65b5\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.655429 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-scripts\") pod \"ad919c71-0957-489d-8ae6-a69c33ab65b5\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.655589 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-combined-ca-bundle\") pod \"ad919c71-0957-489d-8ae6-a69c33ab65b5\" (UID: \"ad919c71-0957-489d-8ae6-a69c33ab65b5\") " Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.656648 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzdnw\" (UniqueName: \"kubernetes.io/projected/ad919c71-0957-489d-8ae6-a69c33ab65b5-kube-api-access-rzdnw\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.656669 4693 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.656680 4693 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ad919c71-0957-489d-8ae6-a69c33ab65b5-logs\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.687023 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-scripts" (OuterVolumeSpecName: "scripts") pod "ad919c71-0957-489d-8ae6-a69c33ab65b5" (UID: "ad919c71-0957-489d-8ae6-a69c33ab65b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.716099 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-config-data" (OuterVolumeSpecName: "config-data") pod "ad919c71-0957-489d-8ae6-a69c33ab65b5" (UID: "ad919c71-0957-489d-8ae6-a69c33ab65b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.731540 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596" (OuterVolumeSpecName: "glance") pod "ad919c71-0957-489d-8ae6-a69c33ab65b5" (UID: "ad919c71-0957-489d-8ae6-a69c33ab65b5"). InnerVolumeSpecName "pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.750808 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.755073 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad919c71-0957-489d-8ae6-a69c33ab65b5" (UID: "ad919c71-0957-489d-8ae6-a69c33ab65b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.763088 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.763346 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.763415 4693 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") on node \"crc\" " Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.763430 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.763440 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad919c71-0957-489d-8ae6-a69c33ab65b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:40 crc kubenswrapper[4693]: E1212 16:14:40.763833 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad919c71-0957-489d-8ae6-a69c33ab65b5" containerName="glance-log" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.763848 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad919c71-0957-489d-8ae6-a69c33ab65b5" containerName="glance-log" Dec 12 16:14:40 crc kubenswrapper[4693]: E1212 16:14:40.763889 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad919c71-0957-489d-8ae6-a69c33ab65b5" containerName="glance-httpd" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.763895 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad919c71-0957-489d-8ae6-a69c33ab65b5" containerName="glance-httpd" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.764139 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad919c71-0957-489d-8ae6-a69c33ab65b5" containerName="glance-log" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.764170 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad919c71-0957-489d-8ae6-a69c33ab65b5" containerName="glance-httpd" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.765665 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.770806 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.770724 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.780911 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.819824 4693 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.819988 4693 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596") on node "crc" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.866193 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c7b2a3-7a48-4d97-ad1f-e828567a3770-config-data\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.866238 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c7b2a3-7a48-4d97-ad1f-e828567a3770-logs\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.866355 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.866378 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57c7b2a3-7a48-4d97-ad1f-e828567a3770-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.866400 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9kk6\" (UniqueName: \"kubernetes.io/projected/57c7b2a3-7a48-4d97-ad1f-e828567a3770-kube-api-access-x9kk6\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.866452 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57c7b2a3-7a48-4d97-ad1f-e828567a3770-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.866470 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c7b2a3-7a48-4d97-ad1f-e828567a3770-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.866534 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c7b2a3-7a48-4d97-ad1f-e828567a3770-scripts\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " 
pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.866589 4693 reconciler_common.go:293] "Volume detached for volume \"pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.968412 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.968730 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57c7b2a3-7a48-4d97-ad1f-e828567a3770-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.968771 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9kk6\" (UniqueName: \"kubernetes.io/projected/57c7b2a3-7a48-4d97-ad1f-e828567a3770-kube-api-access-x9kk6\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.968847 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57c7b2a3-7a48-4d97-ad1f-e828567a3770-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.968875 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c7b2a3-7a48-4d97-ad1f-e828567a3770-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.968971 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c7b2a3-7a48-4d97-ad1f-e828567a3770-scripts\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.969027 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c7b2a3-7a48-4d97-ad1f-e828567a3770-config-data\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.969048 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c7b2a3-7a48-4d97-ad1f-e828567a3770-logs\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.969428 
4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57c7b2a3-7a48-4d97-ad1f-e828567a3770-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.969554 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c7b2a3-7a48-4d97-ad1f-e828567a3770-logs\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.971883 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.972006 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/45099b66416d1862b963c1d5cb75f06870617309ef888c0d9e553da7cdf42994/globalmount\"" pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.973680 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57c7b2a3-7a48-4d97-ad1f-e828567a3770-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.973900 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c7b2a3-7a48-4d97-ad1f-e828567a3770-scripts\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.975633 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c7b2a3-7a48-4d97-ad1f-e828567a3770-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.977310 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c7b2a3-7a48-4d97-ad1f-e828567a3770-config-data\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:40 crc kubenswrapper[4693]: I1212 16:14:40.993130 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9kk6\" (UniqueName: \"kubernetes.io/projected/57c7b2a3-7a48-4d97-ad1f-e828567a3770-kube-api-access-x9kk6\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.003950 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
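Annotation: the csi_attacher.go lines above show the kubelet probing the CSI two-phase mount protocol. kubevirt.io.hostpath-provisioner does not advertise the STAGE_UNSTAGE_VOLUME node capability, so NodeStageVolume/NodeUnstageVolume are skipped (the "Skipping MountDevice/UnmountDevice" messages), MountDevice succeeds as pure bookkeeping, and the per-volume NodePublishVolume calls (MountVolume.SetUp) proceed directly. A sketch of that capability probe against the CSI spec's Go bindings; the function name and surrounding package are assumptions, not kubelet code:

package csiprobe

import (
	"context"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// supportsStaging asks a CSI node plugin whether it implements
// NodeStageVolume/NodeUnstageVolume. When this returns false, a caller
// can skip the device-staging step entirely, as the kubelet does above.
func supportsStaging(ctx context.Context, node csi.NodeClient) (bool, error) {
	resp, err := node.NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		return false, err
	}
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			return true, nil // stage once into a global mount, then bind-mount per pod
		}
	}
	return false, nil // publish directly into each pod's volume path
}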
event={"ID":"27f05709-09cb-455a-a0af-abfb40ebeb04","Type":"ContainerStarted","Data":"80f40386314a6feec805851e5a99636f89c7cc6cfd686ced1d7657bb83f5eed1"} Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.010032 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ad919c71-0957-489d-8ae6-a69c33ab65b5","Type":"ContainerDied","Data":"97b1668bc7bb34b733b72da4cff9f132dd4841e00ecdb107aac797c0f28f2590"} Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.010204 4693 scope.go:117] "RemoveContainer" containerID="43597875dcb20f9421d48d134653503b2bd790db4ff608b14535d56ad43af9ea" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.010751 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.027055 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2db87ee0-b26e-48a3-b434-7900fa644a0d\") pod \"glance-default-external-api-0\" (UID: \"57c7b2a3-7a48-4d97-ad1f-e828567a3770\") " pod="openstack/glance-default-external-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.085473 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.095926 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.113007 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.125762 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.128285 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.132181 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.137108 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.158466 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.172717 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.276780 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec45a9c1-dd3b-4039-8c50-7c797defe34a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.276896 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6gc5\" (UniqueName: \"kubernetes.io/projected/ec45a9c1-dd3b-4039-8c50-7c797defe34a-kube-api-access-b6gc5\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.276938 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec45a9c1-dd3b-4039-8c50-7c797defe34a-logs\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.276959 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec45a9c1-dd3b-4039-8c50-7c797defe34a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.277057 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec45a9c1-dd3b-4039-8c50-7c797defe34a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.277089 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec45a9c1-dd3b-4039-8c50-7c797defe34a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.277133 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec45a9c1-dd3b-4039-8c50-7c797defe34a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " 
pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.277167 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.291303 4693 scope.go:117] "RemoveContainer" containerID="28cff692fc6b5e1d81fb23e9cf6c98205139fbbf7d10adf5042d2792df004e4b" Dec 12 16:14:41 crc kubenswrapper[4693]: W1212 16:14:41.299602 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf145842b_33ac_4d05_b482_706a9cedb695.slice/crio-fdcaa6ca7117162a4481722edefdfeea0f030ab739e6136b8f9dfe53ec2e26d8 WatchSource:0}: Error finding container fdcaa6ca7117162a4481722edefdfeea0f030ab739e6136b8f9dfe53ec2e26d8: Status 404 returned error can't find the container with id fdcaa6ca7117162a4481722edefdfeea0f030ab739e6136b8f9dfe53ec2e26d8 Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.374125 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a88e6cb-9df7-4a63-b5ba-15626ce5da34" path="/var/lib/kubelet/pods/0a88e6cb-9df7-4a63-b5ba-15626ce5da34/volumes" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.376076 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cdcade9-a317-47ba-a03c-0c355c06e306" path="/var/lib/kubelet/pods/7cdcade9-a317-47ba-a03c-0c355c06e306/volumes" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.377448 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad919c71-0957-489d-8ae6-a69c33ab65b5" path="/var/lib/kubelet/pods/ad919c71-0957-489d-8ae6-a69c33ab65b5/volumes" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.386755 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec45a9c1-dd3b-4039-8c50-7c797defe34a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.386837 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6gc5\" (UniqueName: \"kubernetes.io/projected/ec45a9c1-dd3b-4039-8c50-7c797defe34a-kube-api-access-b6gc5\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.386873 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec45a9c1-dd3b-4039-8c50-7c797defe34a-logs\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.386891 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec45a9c1-dd3b-4039-8c50-7c797defe34a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: 
I1212 16:14:41.386947 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec45a9c1-dd3b-4039-8c50-7c797defe34a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.386969 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec45a9c1-dd3b-4039-8c50-7c797defe34a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.386993 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec45a9c1-dd3b-4039-8c50-7c797defe34a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.387014 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.388597 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec45a9c1-dd3b-4039-8c50-7c797defe34a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.388807 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec45a9c1-dd3b-4039-8c50-7c797defe34a-logs\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.391082 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec45a9c1-dd3b-4039-8c50-7c797defe34a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.393436 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec45a9c1-dd3b-4039-8c50-7c797defe34a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.394508 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec45a9c1-dd3b-4039-8c50-7c797defe34a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.397561 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec45a9c1-dd3b-4039-8c50-7c797defe34a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.399793 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.399833 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/50f66c90a996e74d8c468f9c70fbfefea786aa10bcb017475eedefeec7499784/globalmount\"" pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.407106 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6gc5\" (UniqueName: \"kubernetes.io/projected/ec45a9c1-dd3b-4039-8c50-7c797defe34a-kube-api-access-b6gc5\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.469931 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58f7d2b5-19cb-4f90-a469-af09ff98f596\") pod \"glance-default-internal-api-0\" (UID: \"ec45a9c1-dd3b-4039-8c50-7c797defe34a\") " pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.762406 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 12 16:14:41 crc kubenswrapper[4693]: I1212 16:14:41.961858 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 16:14:42 crc kubenswrapper[4693]: I1212 16:14:42.043796 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"57c7b2a3-7a48-4d97-ad1f-e828567a3770","Type":"ContainerStarted","Data":"e771442dcc2261b1914fab8d6a22adfb6ebb76692a30933722ac782d84515b48"} Dec 12 16:14:42 crc kubenswrapper[4693]: I1212 16:14:42.050054 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"27f05709-09cb-455a-a0af-abfb40ebeb04","Type":"ContainerStarted","Data":"c8eda5b8e84e038eae2c89393d8f532b9295c1602b10ed2a6114f250d652771a"} Dec 12 16:14:42 crc kubenswrapper[4693]: I1212 16:14:42.050811 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 12 16:14:42 crc kubenswrapper[4693]: I1212 16:14:42.055399 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f145842b-33ac-4d05-b482-706a9cedb695","Type":"ContainerStarted","Data":"fdcaa6ca7117162a4481722edefdfeea0f030ab739e6136b8f9dfe53ec2e26d8"} Dec 12 16:14:42 crc kubenswrapper[4693]: I1212 16:14:42.084720 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.084695796 podStartE2EDuration="3.084695796s" podCreationTimestamp="2025-12-12 16:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:14:42.069703711 +0000 UTC m=+1709.238343312" watchObservedRunningTime="2025-12-12 16:14:42.084695796 +0000 UTC m=+1709.253335417" Dec 12 16:14:42 crc kubenswrapper[4693]: I1212 16:14:42.580880 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 16:14:42 crc kubenswrapper[4693]: W1212 16:14:42.589410 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec45a9c1_dd3b_4039_8c50_7c797defe34a.slice/crio-3737c561efc4cde75d225ad6551e1456840958b11c053396c224ccd0cb4c41d4 WatchSource:0}: Error finding container 3737c561efc4cde75d225ad6551e1456840958b11c053396c224ccd0cb4c41d4: Status 404 returned error can't find the container with id 3737c561efc4cde75d225ad6551e1456840958b11c053396c224ccd0cb4c41d4 Dec 12 16:14:43 crc kubenswrapper[4693]: I1212 16:14:43.073893 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f145842b-33ac-4d05-b482-706a9cedb695","Type":"ContainerStarted","Data":"6822b9c4d5daf443ad07e167f7d0e2cc41dd18508802a6f32544a3d288408bc1"} Dec 12 16:14:43 crc kubenswrapper[4693]: I1212 16:14:43.075608 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ec45a9c1-dd3b-4039-8c50-7c797defe34a","Type":"ContainerStarted","Data":"3737c561efc4cde75d225ad6551e1456840958b11c053396c224ccd0cb4c41d4"} Dec 12 16:14:43 crc kubenswrapper[4693]: I1212 16:14:43.078462 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"57c7b2a3-7a48-4d97-ad1f-e828567a3770","Type":"ContainerStarted","Data":"6d5f63f00ed2550a521f807b4563e484fc7d935fda9a10a54a8464a1e6e06234"} Dec 12 16:14:44 crc 
kubenswrapper[4693]: I1212 16:14:44.093021 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f145842b-33ac-4d05-b482-706a9cedb695","Type":"ContainerStarted","Data":"decc1783d7ef63470404f955d978ddb45a77e3c44e3f8ebd15ceab5c4dc8d3c7"} Dec 12 16:14:44 crc kubenswrapper[4693]: I1212 16:14:44.095874 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ec45a9c1-dd3b-4039-8c50-7c797defe34a","Type":"ContainerStarted","Data":"0bce3c591df86def344008557c86db748d0415351d164b91fe22deb6d62319db"} Dec 12 16:14:44 crc kubenswrapper[4693]: I1212 16:14:44.097539 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"57c7b2a3-7a48-4d97-ad1f-e828567a3770","Type":"ContainerStarted","Data":"64355fc097e7e0e2959a41d5da051c6b9d0e4038cb45742e45d478c82a59d004"} Dec 12 16:14:44 crc kubenswrapper[4693]: I1212 16:14:44.137006 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.136956716 podStartE2EDuration="4.136956716s" podCreationTimestamp="2025-12-12 16:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:14:44.117743517 +0000 UTC m=+1711.286383118" watchObservedRunningTime="2025-12-12 16:14:44.136956716 +0000 UTC m=+1711.305596317" Dec 12 16:14:44 crc kubenswrapper[4693]: I1212 16:14:44.358496 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:14:44 crc kubenswrapper[4693]: E1212 16:14:44.359111 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:14:45 crc kubenswrapper[4693]: I1212 16:14:45.110701 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f145842b-33ac-4d05-b482-706a9cedb695","Type":"ContainerStarted","Data":"4a31cf28805453c7af7c48a1a570e4a1721332c5c72985be48ada93cee1c10db"} Dec 12 16:14:45 crc kubenswrapper[4693]: I1212 16:14:45.114743 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ec45a9c1-dd3b-4039-8c50-7c797defe34a","Type":"ContainerStarted","Data":"a724f5d66b25f0a8f0932b5197f484b570463f3e8ad4171dc62c8c1e273c3c6e"} Dec 12 16:14:45 crc kubenswrapper[4693]: I1212 16:14:45.139737 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.139717185 podStartE2EDuration="4.139717185s" podCreationTimestamp="2025-12-12 16:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:14:45.134206466 +0000 UTC m=+1712.302846077" watchObservedRunningTime="2025-12-12 16:14:45.139717185 +0000 UTC m=+1712.308356786" Dec 12 16:14:46 crc kubenswrapper[4693]: I1212 16:14:46.126057 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
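Annotation: the "back-off 5m0s restarting failed container" error above is the kubelet's CrashLoopBackOff ceiling. Restarts of a repeatedly failing container are delayed starting at 10s and doubled on each crash (10s, 20s, 40s, 80s, 160s, 300s), capped at five minutes, with the counter reset once the container has run cleanly for ten minutes; machine-config-daemon-wvw2c has evidently been crashing long enough here to sit at the 5m0s cap.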
event={"ID":"f145842b-33ac-4d05-b482-706a9cedb695","Type":"ContainerStarted","Data":"b92e0bd87bad16f893929f322254f4745634d6327681fea4d798e11f89a17487"} Dec 12 16:14:46 crc kubenswrapper[4693]: I1212 16:14:46.191510 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.703719377 podStartE2EDuration="6.191488979s" podCreationTimestamp="2025-12-12 16:14:40 +0000 UTC" firstStartedPulling="2025-12-12 16:14:41.330195609 +0000 UTC m=+1708.498835210" lastFinishedPulling="2025-12-12 16:14:45.817965211 +0000 UTC m=+1712.986604812" observedRunningTime="2025-12-12 16:14:46.167189982 +0000 UTC m=+1713.335829583" watchObservedRunningTime="2025-12-12 16:14:46.191488979 +0000 UTC m=+1713.360128580" Dec 12 16:14:47 crc kubenswrapper[4693]: I1212 16:14:47.135070 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 12 16:14:47 crc kubenswrapper[4693]: I1212 16:14:47.844447 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:49 crc kubenswrapper[4693]: I1212 16:14:49.155196 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f145842b-33ac-4d05-b482-706a9cedb695" containerName="ceilometer-central-agent" containerID="cri-o://6822b9c4d5daf443ad07e167f7d0e2cc41dd18508802a6f32544a3d288408bc1" gracePeriod=30 Dec 12 16:14:49 crc kubenswrapper[4693]: I1212 16:14:49.155241 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f145842b-33ac-4d05-b482-706a9cedb695" containerName="sg-core" containerID="cri-o://4a31cf28805453c7af7c48a1a570e4a1721332c5c72985be48ada93cee1c10db" gracePeriod=30 Dec 12 16:14:49 crc kubenswrapper[4693]: I1212 16:14:49.155268 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f145842b-33ac-4d05-b482-706a9cedb695" containerName="proxy-httpd" containerID="cri-o://b92e0bd87bad16f893929f322254f4745634d6327681fea4d798e11f89a17487" gracePeriod=30 Dec 12 16:14:49 crc kubenswrapper[4693]: I1212 16:14:49.155322 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f145842b-33ac-4d05-b482-706a9cedb695" containerName="ceilometer-notification-agent" containerID="cri-o://decc1783d7ef63470404f955d978ddb45a77e3c44e3f8ebd15ceab5c4dc8d3c7" gracePeriod=30 Dec 12 16:14:49 crc kubenswrapper[4693]: I1212 16:14:49.763188 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.168588 4693 generic.go:334] "Generic (PLEG): container finished" podID="f145842b-33ac-4d05-b482-706a9cedb695" containerID="b92e0bd87bad16f893929f322254f4745634d6327681fea4d798e11f89a17487" exitCode=0 Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.169363 4693 generic.go:334] "Generic (PLEG): container finished" podID="f145842b-33ac-4d05-b482-706a9cedb695" containerID="4a31cf28805453c7af7c48a1a570e4a1721332c5c72985be48ada93cee1c10db" exitCode=2 Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.169400 4693 generic.go:334] "Generic (PLEG): container finished" podID="f145842b-33ac-4d05-b482-706a9cedb695" containerID="decc1783d7ef63470404f955d978ddb45a77e3c44e3f8ebd15ceab5c4dc8d3c7" exitCode=0 Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.168645 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"f145842b-33ac-4d05-b482-706a9cedb695","Type":"ContainerDied","Data":"b92e0bd87bad16f893929f322254f4745634d6327681fea4d798e11f89a17487"} Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.169445 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f145842b-33ac-4d05-b482-706a9cedb695","Type":"ContainerDied","Data":"4a31cf28805453c7af7c48a1a570e4a1721332c5c72985be48ada93cee1c10db"} Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.169459 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f145842b-33ac-4d05-b482-706a9cedb695","Type":"ContainerDied","Data":"decc1783d7ef63470404f955d978ddb45a77e3c44e3f8ebd15ceab5c4dc8d3c7"} Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.243504 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-jm5sn"] Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.245499 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jm5sn" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.247375 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.250209 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.267298 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jm5sn"] Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.364722 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brrkh\" (UniqueName: \"kubernetes.io/projected/47fe2ff8-dc3c-4e98-8502-da788efb57aa-kube-api-access-brrkh\") pod \"nova-cell0-cell-mapping-jm5sn\" (UID: \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\") " pod="openstack/nova-cell0-cell-mapping-jm5sn" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.364826 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-scripts\") pod \"nova-cell0-cell-mapping-jm5sn\" (UID: \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\") " pod="openstack/nova-cell0-cell-mapping-jm5sn" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.364852 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-config-data\") pod \"nova-cell0-cell-mapping-jm5sn\" (UID: \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\") " pod="openstack/nova-cell0-cell-mapping-jm5sn" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.364899 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jm5sn\" (UID: \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\") " pod="openstack/nova-cell0-cell-mapping-jm5sn" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.429560 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.432037 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.434963 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.455213 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.481394 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brrkh\" (UniqueName: \"kubernetes.io/projected/47fe2ff8-dc3c-4e98-8502-da788efb57aa-kube-api-access-brrkh\") pod \"nova-cell0-cell-mapping-jm5sn\" (UID: \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\") " pod="openstack/nova-cell0-cell-mapping-jm5sn" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.481993 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-scripts\") pod \"nova-cell0-cell-mapping-jm5sn\" (UID: \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\") " pod="openstack/nova-cell0-cell-mapping-jm5sn" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.482137 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-config-data\") pod \"nova-cell0-cell-mapping-jm5sn\" (UID: \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\") " pod="openstack/nova-cell0-cell-mapping-jm5sn" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.482716 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jm5sn\" (UID: \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\") " pod="openstack/nova-cell0-cell-mapping-jm5sn" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.506201 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.513048 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.515398 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jm5sn\" (UID: \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\") " pod="openstack/nova-cell0-cell-mapping-jm5sn" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.519627 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brrkh\" (UniqueName: \"kubernetes.io/projected/47fe2ff8-dc3c-4e98-8502-da788efb57aa-kube-api-access-brrkh\") pod \"nova-cell0-cell-mapping-jm5sn\" (UID: \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\") " pod="openstack/nova-cell0-cell-mapping-jm5sn" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.524518 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-scripts\") pod \"nova-cell0-cell-mapping-jm5sn\" (UID: \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\") " pod="openstack/nova-cell0-cell-mapping-jm5sn" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.526517 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-config-data\") pod \"nova-cell0-cell-mapping-jm5sn\" (UID: \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\") " pod="openstack/nova-cell0-cell-mapping-jm5sn" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.540040 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.540659 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.567294 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jm5sn" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.579547 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.581288 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.582788 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.585750 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6dch\" (UniqueName: \"kubernetes.io/projected/ff9d8d2e-ddae-487e-b259-334e7df154d4-kube-api-access-f6dch\") pod \"nova-api-0\" (UID: \"ff9d8d2e-ddae-487e-b259-334e7df154d4\") " pod="openstack/nova-api-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.585830 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff9d8d2e-ddae-487e-b259-334e7df154d4-config-data\") pod \"nova-api-0\" (UID: \"ff9d8d2e-ddae-487e-b259-334e7df154d4\") " pod="openstack/nova-api-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.585858 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff9d8d2e-ddae-487e-b259-334e7df154d4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff9d8d2e-ddae-487e-b259-334e7df154d4\") " pod="openstack/nova-api-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.589388 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff9d8d2e-ddae-487e-b259-334e7df154d4-logs\") pod \"nova-api-0\" (UID: \"ff9d8d2e-ddae-487e-b259-334e7df154d4\") " pod="openstack/nova-api-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.600047 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-4447h"] Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.601907 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-4447h" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.626715 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-4447h"] Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.661362 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.695331 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff9d8d2e-ddae-487e-b259-334e7df154d4-logs\") pod \"nova-api-0\" (UID: \"ff9d8d2e-ddae-487e-b259-334e7df154d4\") " pod="openstack/nova-api-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.695620 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/870d139b-dca4-4055-acd5-6b264b9cf889-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"870d139b-dca4-4055-acd5-6b264b9cf889\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.695700 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzcf9\" (UniqueName: \"kubernetes.io/projected/870d139b-dca4-4055-acd5-6b264b9cf889-kube-api-access-dzcf9\") pod \"nova-cell1-novncproxy-0\" (UID: \"870d139b-dca4-4055-acd5-6b264b9cf889\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.695726 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h8d6\" (UniqueName: \"kubernetes.io/projected/7179e70d-0681-41fa-ba91-298bd275b282-kube-api-access-7h8d6\") pod \"nova-scheduler-0\" (UID: \"7179e70d-0681-41fa-ba91-298bd275b282\") " pod="openstack/nova-scheduler-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.695894 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870d139b-dca4-4055-acd5-6b264b9cf889-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"870d139b-dca4-4055-acd5-6b264b9cf889\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.695913 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6dch\" (UniqueName: \"kubernetes.io/projected/ff9d8d2e-ddae-487e-b259-334e7df154d4-kube-api-access-f6dch\") pod \"nova-api-0\" (UID: \"ff9d8d2e-ddae-487e-b259-334e7df154d4\") " pod="openstack/nova-api-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.695955 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7179e70d-0681-41fa-ba91-298bd275b282-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7179e70d-0681-41fa-ba91-298bd275b282\") " pod="openstack/nova-scheduler-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.696532 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff9d8d2e-ddae-487e-b259-334e7df154d4-config-data\") pod \"nova-api-0\" (UID: \"ff9d8d2e-ddae-487e-b259-334e7df154d4\") " pod="openstack/nova-api-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.696564 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7179e70d-0681-41fa-ba91-298bd275b282-config-data\") pod \"nova-scheduler-0\" (UID: \"7179e70d-0681-41fa-ba91-298bd275b282\") " pod="openstack/nova-scheduler-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.696582 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff9d8d2e-ddae-487e-b259-334e7df154d4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff9d8d2e-ddae-487e-b259-334e7df154d4\") " pod="openstack/nova-api-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.699044 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff9d8d2e-ddae-487e-b259-334e7df154d4-logs\") pod \"nova-api-0\" (UID: \"ff9d8d2e-ddae-487e-b259-334e7df154d4\") " pod="openstack/nova-api-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.705293 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff9d8d2e-ddae-487e-b259-334e7df154d4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff9d8d2e-ddae-487e-b259-334e7df154d4\") " pod="openstack/nova-api-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.736986 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff9d8d2e-ddae-487e-b259-334e7df154d4-config-data\") pod \"nova-api-0\" (UID: \"ff9d8d2e-ddae-487e-b259-334e7df154d4\") " pod="openstack/nova-api-0" Dec 12 16:14:50 crc kubenswrapper[4693]: I1212 16:14:50.774370 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6dch\" (UniqueName: \"kubernetes.io/projected/ff9d8d2e-ddae-487e-b259-334e7df154d4-kube-api-access-f6dch\") pod \"nova-api-0\" (UID: \"ff9d8d2e-ddae-487e-b259-334e7df154d4\") " pod="openstack/nova-api-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.804873 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870d139b-dca4-4055-acd5-6b264b9cf889-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"870d139b-dca4-4055-acd5-6b264b9cf889\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.805020 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7179e70d-0681-41fa-ba91-298bd275b282-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7179e70d-0681-41fa-ba91-298bd275b282\") " pod="openstack/nova-scheduler-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.805285 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6250357e-de93-4024-acda-b1ebbf788eca-operator-scripts\") pod \"aodh-db-create-4447h\" (UID: \"6250357e-de93-4024-acda-b1ebbf788eca\") " pod="openstack/aodh-db-create-4447h" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.805327 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7179e70d-0681-41fa-ba91-298bd275b282-config-data\") pod \"nova-scheduler-0\" (UID: \"7179e70d-0681-41fa-ba91-298bd275b282\") " pod="openstack/nova-scheduler-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.805521 4693 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/870d139b-dca4-4055-acd5-6b264b9cf889-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"870d139b-dca4-4055-acd5-6b264b9cf889\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.806311 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzcf9\" (UniqueName: \"kubernetes.io/projected/870d139b-dca4-4055-acd5-6b264b9cf889-kube-api-access-dzcf9\") pod \"nova-cell1-novncproxy-0\" (UID: \"870d139b-dca4-4055-acd5-6b264b9cf889\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.806359 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h8d6\" (UniqueName: \"kubernetes.io/projected/7179e70d-0681-41fa-ba91-298bd275b282-kube-api-access-7h8d6\") pod \"nova-scheduler-0\" (UID: \"7179e70d-0681-41fa-ba91-298bd275b282\") " pod="openstack/nova-scheduler-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.806654 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhhd\" (UniqueName: \"kubernetes.io/projected/6250357e-de93-4024-acda-b1ebbf788eca-kube-api-access-2dhhd\") pod \"aodh-db-create-4447h\" (UID: \"6250357e-de93-4024-acda-b1ebbf788eca\") " pod="openstack/aodh-db-create-4447h" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.818983 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/870d139b-dca4-4055-acd5-6b264b9cf889-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"870d139b-dca4-4055-acd5-6b264b9cf889\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.819720 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7179e70d-0681-41fa-ba91-298bd275b282-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7179e70d-0681-41fa-ba91-298bd275b282\") " pod="openstack/nova-scheduler-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.824629 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870d139b-dca4-4055-acd5-6b264b9cf889-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"870d139b-dca4-4055-acd5-6b264b9cf889\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.837554 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7179e70d-0681-41fa-ba91-298bd275b282-config-data\") pod \"nova-scheduler-0\" (UID: \"7179e70d-0681-41fa-ba91-298bd275b282\") " pod="openstack/nova-scheduler-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.842316 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.845519 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h8d6\" (UniqueName: \"kubernetes.io/projected/7179e70d-0681-41fa-ba91-298bd275b282-kube-api-access-7h8d6\") pod \"nova-scheduler-0\" (UID: \"7179e70d-0681-41fa-ba91-298bd275b282\") " pod="openstack/nova-scheduler-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.854195 4693 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.859379 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.869779 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzcf9\" (UniqueName: \"kubernetes.io/projected/870d139b-dca4-4055-acd5-6b264b9cf889-kube-api-access-dzcf9\") pod \"nova-cell1-novncproxy-0\" (UID: \"870d139b-dca4-4055-acd5-6b264b9cf889\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.897156 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.909198 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.916004 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dhhd\" (UniqueName: \"kubernetes.io/projected/6250357e-de93-4024-acda-b1ebbf788eca-kube-api-access-2dhhd\") pod \"aodh-db-create-4447h\" (UID: \"6250357e-de93-4024-acda-b1ebbf788eca\") " pod="openstack/aodh-db-create-4447h" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.916257 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6250357e-de93-4024-acda-b1ebbf788eca-operator-scripts\") pod \"aodh-db-create-4447h\" (UID: \"6250357e-de93-4024-acda-b1ebbf788eca\") " pod="openstack/aodh-db-create-4447h" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.917133 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6250357e-de93-4024-acda-b1ebbf788eca-operator-scripts\") pod \"aodh-db-create-4447h\" (UID: \"6250357e-de93-4024-acda-b1ebbf788eca\") " pod="openstack/aodh-db-create-4447h" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.926133 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-b95d-account-create-update-km26x"] Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.930244 4693 util.go:30] "No sandbox for pod can be found. 
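The kube-api-access-* projected volumes being mounted here (kube-api-access-2dhhd for aodh-db-create-4447h, and the others above) are the standard bound service-account token volumes; inside the container they surface as token, ca.crt and namespace files under a well-known path. A sketch of reading them from inside a pod:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // The projected kube-api-access-* volume is mounted at this conventional
    // path in every container that automounts its service account.
    const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

    func main() {
        for _, f := range []string{"token", "ca.crt", "namespace"} {
            b, err := os.ReadFile(filepath.Join(saDir, f))
            if err != nil {
                fmt.Println(f, "unavailable:", err) // e.g. when run outside a pod
                continue
            }
            fmt.Printf("%s: %d bytes\n", f, len(b))
        }
    }
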
Need to start a new one" pod="openstack/aodh-b95d-account-create-update-km26x" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.940350 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.956520 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dhhd\" (UniqueName: \"kubernetes.io/projected/6250357e-de93-4024-acda-b1ebbf788eca-kube-api-access-2dhhd\") pod \"aodh-db-create-4447h\" (UID: \"6250357e-de93-4024-acda-b1ebbf788eca\") " pod="openstack/aodh-db-create-4447h" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:50.981605 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-b95d-account-create-update-km26x"] Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.018971 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72708ce9-a61f-447d-b58b-4f23ac302313-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"72708ce9-a61f-447d-b58b-4f23ac302313\") " pod="openstack/nova-metadata-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.019225 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp9pv\" (UniqueName: \"kubernetes.io/projected/72708ce9-a61f-447d-b58b-4f23ac302313-kube-api-access-cp9pv\") pod \"nova-metadata-0\" (UID: \"72708ce9-a61f-447d-b58b-4f23ac302313\") " pod="openstack/nova-metadata-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.019317 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72708ce9-a61f-447d-b58b-4f23ac302313-logs\") pod \"nova-metadata-0\" (UID: \"72708ce9-a61f-447d-b58b-4f23ac302313\") " pod="openstack/nova-metadata-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.019350 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72708ce9-a61f-447d-b58b-4f23ac302313-config-data\") pod \"nova-metadata-0\" (UID: \"72708ce9-a61f-447d-b58b-4f23ac302313\") " pod="openstack/nova-metadata-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.037903 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-962pm"] Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.056531 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.087989 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.089779 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.093262 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.127202 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-962pm"] Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.122630 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp9pv\" (UniqueName: \"kubernetes.io/projected/72708ce9-a61f-447d-b58b-4f23ac302313-kube-api-access-cp9pv\") pod \"nova-metadata-0\" (UID: \"72708ce9-a61f-447d-b58b-4f23ac302313\") " pod="openstack/nova-metadata-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.135431 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72708ce9-a61f-447d-b58b-4f23ac302313-logs\") pod \"nova-metadata-0\" (UID: \"72708ce9-a61f-447d-b58b-4f23ac302313\") " pod="openstack/nova-metadata-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.135475 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72708ce9-a61f-447d-b58b-4f23ac302313-config-data\") pod \"nova-metadata-0\" (UID: \"72708ce9-a61f-447d-b58b-4f23ac302313\") " pod="openstack/nova-metadata-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.135593 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e9c57e3-dcab-45d3-9013-a7348e6d94ec-operator-scripts\") pod \"aodh-b95d-account-create-update-km26x\" (UID: \"5e9c57e3-dcab-45d3-9013-a7348e6d94ec\") " pod="openstack/aodh-b95d-account-create-update-km26x" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.135882 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72708ce9-a61f-447d-b58b-4f23ac302313-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"72708ce9-a61f-447d-b58b-4f23ac302313\") " pod="openstack/nova-metadata-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.136002 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2rrl\" (UniqueName: \"kubernetes.io/projected/5e9c57e3-dcab-45d3-9013-a7348e6d94ec-kube-api-access-x2rrl\") pod \"aodh-b95d-account-create-update-km26x\" (UID: \"5e9c57e3-dcab-45d3-9013-a7348e6d94ec\") " pod="openstack/aodh-b95d-account-create-update-km26x" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.138154 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72708ce9-a61f-447d-b58b-4f23ac302313-logs\") pod \"nova-metadata-0\" (UID: \"72708ce9-a61f-447d-b58b-4f23ac302313\") " pod="openstack/nova-metadata-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.145224 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp9pv\" (UniqueName: \"kubernetes.io/projected/72708ce9-a61f-447d-b58b-4f23ac302313-kube-api-access-cp9pv\") pod \"nova-metadata-0\" (UID: \"72708ce9-a61f-447d-b58b-4f23ac302313\") " pod="openstack/nova-metadata-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.146492 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72708ce9-a61f-447d-b58b-4f23ac302313-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"72708ce9-a61f-447d-b58b-4f23ac302313\") " pod="openstack/nova-metadata-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.155119 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72708ce9-a61f-447d-b58b-4f23ac302313-config-data\") pod \"nova-metadata-0\" (UID: \"72708ce9-a61f-447d-b58b-4f23ac302313\") " pod="openstack/nova-metadata-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.242002 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.242068 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.242132 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2q4j\" (UniqueName: \"kubernetes.io/projected/9850dbcd-93ea-47b5-a812-9ba8821b8110-kube-api-access-t2q4j\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.242166 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2rrl\" (UniqueName: \"kubernetes.io/projected/5e9c57e3-dcab-45d3-9013-a7348e6d94ec-kube-api-access-x2rrl\") pod \"aodh-b95d-account-create-update-km26x\" (UID: \"5e9c57e3-dcab-45d3-9013-a7348e6d94ec\") " pod="openstack/aodh-b95d-account-create-update-km26x" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.242285 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-config\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.246697 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e9c57e3-dcab-45d3-9013-a7348e6d94ec-operator-scripts\") pod \"aodh-b95d-account-create-update-km26x\" (UID: \"5e9c57e3-dcab-45d3-9013-a7348e6d94ec\") " pod="openstack/aodh-b95d-account-create-update-km26x" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.247025 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.247120 4693 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-dns-svc\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.252197 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e9c57e3-dcab-45d3-9013-a7348e6d94ec-operator-scripts\") pod \"aodh-b95d-account-create-update-km26x\" (UID: \"5e9c57e3-dcab-45d3-9013-a7348e6d94ec\") " pod="openstack/aodh-b95d-account-create-update-km26x" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.275709 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.291176 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2rrl\" (UniqueName: \"kubernetes.io/projected/5e9c57e3-dcab-45d3-9013-a7348e6d94ec-kube-api-access-x2rrl\") pod \"aodh-b95d-account-create-update-km26x\" (UID: \"5e9c57e3-dcab-45d3-9013-a7348e6d94ec\") " pod="openstack/aodh-b95d-account-create-update-km26x" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.310116 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.325235 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-4447h" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.352821 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-dns-svc\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.352938 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.353008 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.353058 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2q4j\" (UniqueName: \"kubernetes.io/projected/9850dbcd-93ea-47b5-a812-9ba8821b8110-kube-api-access-t2q4j\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.353255 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-config\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " 
pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.353752 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.361367 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-dns-svc\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.361973 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.362645 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.362807 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.379050 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.408683 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-config\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.427809 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.432037 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2q4j\" (UniqueName: \"kubernetes.io/projected/9850dbcd-93ea-47b5-a812-9ba8821b8110-kube-api-access-t2q4j\") pod \"dnsmasq-dns-9b86998b5-962pm\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.451554 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-b95d-account-create-update-km26x" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.738846 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.762819 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.762861 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.847543 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 12 16:14:51 crc kubenswrapper[4693]: I1212 16:14:51.860966 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.233297 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.233654 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.233671 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.233848 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.606346 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mp8v7"] Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.608097 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mp8v7" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.611465 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.611684 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.678045 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mp8v7"] Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.757294 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gck5k\" (UniqueName: \"kubernetes.io/projected/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-kube-api-access-gck5k\") pod \"nova-cell1-conductor-db-sync-mp8v7\" (UID: \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\") " pod="openstack/nova-cell1-conductor-db-sync-mp8v7" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.757510 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-scripts\") pod \"nova-cell1-conductor-db-sync-mp8v7\" (UID: \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\") " pod="openstack/nova-cell1-conductor-db-sync-mp8v7" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.757590 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mp8v7\" (UID: \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\") " pod="openstack/nova-cell1-conductor-db-sync-mp8v7" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.757632 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-config-data\") pod \"nova-cell1-conductor-db-sync-mp8v7\" (UID: \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\") " pod="openstack/nova-cell1-conductor-db-sync-mp8v7" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.860923 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-scripts\") pod \"nova-cell1-conductor-db-sync-mp8v7\" (UID: \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\") " pod="openstack/nova-cell1-conductor-db-sync-mp8v7" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.861068 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mp8v7\" (UID: \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\") " pod="openstack/nova-cell1-conductor-db-sync-mp8v7" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.861119 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-config-data\") pod \"nova-cell1-conductor-db-sync-mp8v7\" (UID: \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\") " pod="openstack/nova-cell1-conductor-db-sync-mp8v7" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.861170 4693 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gck5k\" (UniqueName: \"kubernetes.io/projected/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-kube-api-access-gck5k\") pod \"nova-cell1-conductor-db-sync-mp8v7\" (UID: \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\") " pod="openstack/nova-cell1-conductor-db-sync-mp8v7" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.871788 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-config-data\") pod \"nova-cell1-conductor-db-sync-mp8v7\" (UID: \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\") " pod="openstack/nova-cell1-conductor-db-sync-mp8v7" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.878882 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-scripts\") pod \"nova-cell1-conductor-db-sync-mp8v7\" (UID: \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\") " pod="openstack/nova-cell1-conductor-db-sync-mp8v7" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.895593 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gck5k\" (UniqueName: \"kubernetes.io/projected/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-kube-api-access-gck5k\") pod \"nova-cell1-conductor-db-sync-mp8v7\" (UID: \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\") " pod="openstack/nova-cell1-conductor-db-sync-mp8v7" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.906500 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mp8v7\" (UID: \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\") " pod="openstack/nova-cell1-conductor-db-sync-mp8v7" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.950905 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mp8v7" Dec 12 16:14:52 crc kubenswrapper[4693]: I1212 16:14:52.975147 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 16:14:53 crc kubenswrapper[4693]: I1212 16:14:53.031026 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jm5sn"] Dec 12 16:14:53 crc kubenswrapper[4693]: I1212 16:14:53.088109 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 16:14:53 crc kubenswrapper[4693]: I1212 16:14:53.141205 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 16:14:53 crc kubenswrapper[4693]: I1212 16:14:53.157582 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-4447h"] Dec 12 16:14:53 crc kubenswrapper[4693]: I1212 16:14:53.175322 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-b95d-account-create-update-km26x"] Dec 12 16:14:53 crc kubenswrapper[4693]: I1212 16:14:53.188709 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 12 16:14:53 crc kubenswrapper[4693]: I1212 16:14:53.238615 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-962pm"] Dec 12 16:14:53 crc kubenswrapper[4693]: I1212 16:14:53.264923 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-b95d-account-create-update-km26x" event={"ID":"5e9c57e3-dcab-45d3-9013-a7348e6d94ec","Type":"ContainerStarted","Data":"2cbdd9cf1ade61e96b9bd3a84446cb8e5d8f9c1015c83767151679ec0cc49a14"} Dec 12 16:14:53 crc kubenswrapper[4693]: I1212 16:14:53.274799 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72708ce9-a61f-447d-b58b-4f23ac302313","Type":"ContainerStarted","Data":"6af90412695f790f1cad218df786b749d48e701c8b491779f43f9f8e3b4b0e96"} Dec 12 16:14:53 crc kubenswrapper[4693]: I1212 16:14:53.280976 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"870d139b-dca4-4055-acd5-6b264b9cf889","Type":"ContainerStarted","Data":"069be7727790fb35325d4cba4b3eb425ed3d3591114a932e50945141d7b78ef1"} Dec 12 16:14:53 crc kubenswrapper[4693]: I1212 16:14:53.305209 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff9d8d2e-ddae-487e-b259-334e7df154d4","Type":"ContainerStarted","Data":"8d0bf36620b1dd640e6f18b6a5d0d9cc4e408e96afa6a3563e7be1432e136d21"} Dec 12 16:14:53 crc kubenswrapper[4693]: I1212 16:14:53.315508 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jm5sn" event={"ID":"47fe2ff8-dc3c-4e98-8502-da788efb57aa","Type":"ContainerStarted","Data":"858ab4874a9577644ff1aeb321006ceaeb543acc559b297a19ad33398e2fed09"} Dec 12 16:14:53 crc kubenswrapper[4693]: I1212 16:14:53.321665 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-4447h" event={"ID":"6250357e-de93-4024-acda-b1ebbf788eca","Type":"ContainerStarted","Data":"af310bc361ef644a1b0368a4952b435a23865d63c49c601ceb0bfb840d40ca2e"} Dec 12 16:14:53 crc kubenswrapper[4693]: I1212 16:14:53.332933 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7179e70d-0681-41fa-ba91-298bd275b282","Type":"ContainerStarted","Data":"ac2cd5ee41459461f4be67acc8800f2885f1d0d49ffc626a0fe8f368fbbe3f92"} Dec 12 16:14:53 crc kubenswrapper[4693]: I1212 
16:14:53.339952 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-962pm" event={"ID":"9850dbcd-93ea-47b5-a812-9ba8821b8110","Type":"ContainerStarted","Data":"7117ba545edb2e49cdb881b8603f403554e4676439bc477e78f66a693cbdf94f"} Dec 12 16:14:53 crc kubenswrapper[4693]: I1212 16:14:53.725278 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mp8v7"] Dec 12 16:14:53 crc kubenswrapper[4693]: W1212 16:14:53.753069 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f211a4d_5e81_46e3_a31f_8f359ba8da6f.slice/crio-77a46cf73747b12af6dd88a3a8cec4e7d287da738ceb737e9863dafef334d10f WatchSource:0}: Error finding container 77a46cf73747b12af6dd88a3a8cec4e7d287da738ceb737e9863dafef334d10f: Status 404 returned error can't find the container with id 77a46cf73747b12af6dd88a3a8cec4e7d287da738ceb737e9863dafef334d10f Dec 12 16:14:54 crc kubenswrapper[4693]: I1212 16:14:54.363350 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jm5sn" event={"ID":"47fe2ff8-dc3c-4e98-8502-da788efb57aa","Type":"ContainerStarted","Data":"3881cbaeb54bb969750ad936ba7a54cc0d72415c4fa3823a04ea5c3eb32f205d"} Dec 12 16:14:54 crc kubenswrapper[4693]: I1212 16:14:54.370018 4693 generic.go:334] "Generic (PLEG): container finished" podID="6250357e-de93-4024-acda-b1ebbf788eca" containerID="9e0ef1f546e0f10dd65c70ed45c63defd4a8d3458e620a2ffdb68ada4e1b1135" exitCode=0 Dec 12 16:14:54 crc kubenswrapper[4693]: I1212 16:14:54.370090 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-4447h" event={"ID":"6250357e-de93-4024-acda-b1ebbf788eca","Type":"ContainerDied","Data":"9e0ef1f546e0f10dd65c70ed45c63defd4a8d3458e620a2ffdb68ada4e1b1135"} Dec 12 16:14:54 crc kubenswrapper[4693]: I1212 16:14:54.372227 4693 generic.go:334] "Generic (PLEG): container finished" podID="9850dbcd-93ea-47b5-a812-9ba8821b8110" containerID="e8f9699fa6eca2378fdefee886718df35104b69fb380c66f17b82350917426f4" exitCode=0 Dec 12 16:14:54 crc kubenswrapper[4693]: I1212 16:14:54.372536 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-962pm" event={"ID":"9850dbcd-93ea-47b5-a812-9ba8821b8110","Type":"ContainerDied","Data":"e8f9699fa6eca2378fdefee886718df35104b69fb380c66f17b82350917426f4"} Dec 12 16:14:54 crc kubenswrapper[4693]: I1212 16:14:54.376888 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mp8v7" event={"ID":"0f211a4d-5e81-46e3-a31f-8f359ba8da6f","Type":"ContainerStarted","Data":"6bca49c660e88075753b269ec3c652bb4f3ec69effaa3def39b1bd26971e6ae8"} Dec 12 16:14:54 crc kubenswrapper[4693]: I1212 16:14:54.376931 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mp8v7" event={"ID":"0f211a4d-5e81-46e3-a31f-8f359ba8da6f","Type":"ContainerStarted","Data":"77a46cf73747b12af6dd88a3a8cec4e7d287da738ceb737e9863dafef334d10f"} Dec 12 16:14:54 crc kubenswrapper[4693]: I1212 16:14:54.384474 4693 generic.go:334] "Generic (PLEG): container finished" podID="5e9c57e3-dcab-45d3-9013-a7348e6d94ec" containerID="decf2d28d710671f8eecd2570b5ca3183b09dc12e6b450d48ed874bd7344afc9" exitCode=0 Dec 12 16:14:54 crc kubenswrapper[4693]: I1212 16:14:54.384521 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-b95d-account-create-update-km26x" 
event={"ID":"5e9c57e3-dcab-45d3-9013-a7348e6d94ec","Type":"ContainerDied","Data":"decf2d28d710671f8eecd2570b5ca3183b09dc12e6b450d48ed874bd7344afc9"} Dec 12 16:14:54 crc kubenswrapper[4693]: I1212 16:14:54.391429 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-jm5sn" podStartSLOduration=4.391410405 podStartE2EDuration="4.391410405s" podCreationTimestamp="2025-12-12 16:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:14:54.381621131 +0000 UTC m=+1721.550260732" watchObservedRunningTime="2025-12-12 16:14:54.391410405 +0000 UTC m=+1721.560050006" Dec 12 16:14:54 crc kubenswrapper[4693]: I1212 16:14:54.474980 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-mp8v7" podStartSLOduration=2.474957154 podStartE2EDuration="2.474957154s" podCreationTimestamp="2025-12-12 16:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:14:54.466861545 +0000 UTC m=+1721.635501146" watchObservedRunningTime="2025-12-12 16:14:54.474957154 +0000 UTC m=+1721.643596755" Dec 12 16:14:54 crc kubenswrapper[4693]: I1212 16:14:54.949413 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 16:14:54 crc kubenswrapper[4693]: I1212 16:14:54.975556 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.384945 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.521372 4693 generic.go:334] "Generic (PLEG): container finished" podID="f145842b-33ac-4d05-b482-706a9cedb695" containerID="6822b9c4d5daf443ad07e167f7d0e2cc41dd18508802a6f32544a3d288408bc1" exitCode=0 Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.521445 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f145842b-33ac-4d05-b482-706a9cedb695","Type":"ContainerDied","Data":"6822b9c4d5daf443ad07e167f7d0e2cc41dd18508802a6f32544a3d288408bc1"} Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.521475 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f145842b-33ac-4d05-b482-706a9cedb695","Type":"ContainerDied","Data":"fdcaa6ca7117162a4481722edefdfeea0f030ab739e6136b8f9dfe53ec2e26d8"} Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.521494 4693 scope.go:117] "RemoveContainer" containerID="b92e0bd87bad16f893929f322254f4745634d6327681fea4d798e11f89a17487" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.521655 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.523750 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f145842b-33ac-4d05-b482-706a9cedb695-run-httpd\") pod \"f145842b-33ac-4d05-b482-706a9cedb695\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.523793 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46mrl\" (UniqueName: \"kubernetes.io/projected/f145842b-33ac-4d05-b482-706a9cedb695-kube-api-access-46mrl\") pod \"f145842b-33ac-4d05-b482-706a9cedb695\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.523882 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-sg-core-conf-yaml\") pod \"f145842b-33ac-4d05-b482-706a9cedb695\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.523967 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-scripts\") pod \"f145842b-33ac-4d05-b482-706a9cedb695\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.523992 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f145842b-33ac-4d05-b482-706a9cedb695-log-httpd\") pod \"f145842b-33ac-4d05-b482-706a9cedb695\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.524039 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-config-data\") pod \"f145842b-33ac-4d05-b482-706a9cedb695\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.524122 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-combined-ca-bundle\") pod \"f145842b-33ac-4d05-b482-706a9cedb695\" (UID: \"f145842b-33ac-4d05-b482-706a9cedb695\") " Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.531296 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f145842b-33ac-4d05-b482-706a9cedb695-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f145842b-33ac-4d05-b482-706a9cedb695" (UID: "f145842b-33ac-4d05-b482-706a9cedb695"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.533446 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f145842b-33ac-4d05-b482-706a9cedb695-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f145842b-33ac-4d05-b482-706a9cedb695" (UID: "f145842b-33ac-4d05-b482-706a9cedb695"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.587895 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f145842b-33ac-4d05-b482-706a9cedb695-kube-api-access-46mrl" (OuterVolumeSpecName: "kube-api-access-46mrl") pod "f145842b-33ac-4d05-b482-706a9cedb695" (UID: "f145842b-33ac-4d05-b482-706a9cedb695"). InnerVolumeSpecName "kube-api-access-46mrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.605654 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-scripts" (OuterVolumeSpecName: "scripts") pod "f145842b-33ac-4d05-b482-706a9cedb695" (UID: "f145842b-33ac-4d05-b482-706a9cedb695"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.619094 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-962pm" event={"ID":"9850dbcd-93ea-47b5-a812-9ba8821b8110","Type":"ContainerStarted","Data":"8b690b5cc50e1756e7f84b80c990b953384b8776b66347598be63f531c66a2b4"} Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.620889 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.627422 4693 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f145842b-33ac-4d05-b482-706a9cedb695-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.627465 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46mrl\" (UniqueName: \"kubernetes.io/projected/f145842b-33ac-4d05-b482-706a9cedb695-kube-api-access-46mrl\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.627478 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.627486 4693 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f145842b-33ac-4d05-b482-706a9cedb695-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.689753 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f145842b-33ac-4d05-b482-706a9cedb695" (UID: "f145842b-33ac-4d05-b482-706a9cedb695"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.731766 4693 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.784263 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f145842b-33ac-4d05-b482-706a9cedb695" (UID: "f145842b-33ac-4d05-b482-706a9cedb695"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.833559 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.872171 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-config-data" (OuterVolumeSpecName: "config-data") pod "f145842b-33ac-4d05-b482-706a9cedb695" (UID: "f145842b-33ac-4d05-b482-706a9cedb695"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:14:55 crc kubenswrapper[4693]: I1212 16:14:55.936509 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f145842b-33ac-4d05-b482-706a9cedb695-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.181059 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-962pm" podStartSLOduration=6.181031807 podStartE2EDuration="6.181031807s" podCreationTimestamp="2025-12-12 16:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:14:55.674792461 +0000 UTC m=+1722.843432052" watchObservedRunningTime="2025-12-12 16:14:56.181031807 +0000 UTC m=+1723.349671418" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.204425 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.276750 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.298457 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:56 crc kubenswrapper[4693]: E1212 16:14:56.299205 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f145842b-33ac-4d05-b482-706a9cedb695" containerName="proxy-httpd" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.299233 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f145842b-33ac-4d05-b482-706a9cedb695" containerName="proxy-httpd" Dec 12 16:14:56 crc kubenswrapper[4693]: E1212 16:14:56.299261 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f145842b-33ac-4d05-b482-706a9cedb695" containerName="sg-core" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.299303 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f145842b-33ac-4d05-b482-706a9cedb695" containerName="sg-core" Dec 12 16:14:56 crc kubenswrapper[4693]: E1212 16:14:56.299332 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f145842b-33ac-4d05-b482-706a9cedb695" containerName="ceilometer-central-agent" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.299342 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f145842b-33ac-4d05-b482-706a9cedb695" containerName="ceilometer-central-agent" Dec 12 16:14:56 crc kubenswrapper[4693]: E1212 16:14:56.299383 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f145842b-33ac-4d05-b482-706a9cedb695" containerName="ceilometer-notification-agent" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.299393 4693 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f145842b-33ac-4d05-b482-706a9cedb695" containerName="ceilometer-notification-agent" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.299732 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f145842b-33ac-4d05-b482-706a9cedb695" containerName="ceilometer-notification-agent" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.299761 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f145842b-33ac-4d05-b482-706a9cedb695" containerName="ceilometer-central-agent" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.299790 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f145842b-33ac-4d05-b482-706a9cedb695" containerName="proxy-httpd" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.299816 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f145842b-33ac-4d05-b482-706a9cedb695" containerName="sg-core" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.303057 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.310555 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.313865 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.330968 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.380381 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.380526 4693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.384346 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.464940 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38afd4c5-3e1b-4a5f-a82c-ac270a369417-run-httpd\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.465028 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.465064 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.465137 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-scripts\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " 
pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.465286 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g6cg\" (UniqueName: \"kubernetes.io/projected/38afd4c5-3e1b-4a5f-a82c-ac270a369417-kube-api-access-5g6cg\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.465368 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38afd4c5-3e1b-4a5f-a82c-ac270a369417-log-httpd\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.465391 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-config-data\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.567251 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-scripts\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.572256 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g6cg\" (UniqueName: \"kubernetes.io/projected/38afd4c5-3e1b-4a5f-a82c-ac270a369417-kube-api-access-5g6cg\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.572960 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38afd4c5-3e1b-4a5f-a82c-ac270a369417-log-httpd\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.573012 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-config-data\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.574094 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38afd4c5-3e1b-4a5f-a82c-ac270a369417-log-httpd\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.574234 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38afd4c5-3e1b-4a5f-a82c-ac270a369417-run-httpd\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.574332 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.574376 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.575109 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38afd4c5-3e1b-4a5f-a82c-ac270a369417-run-httpd\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.579017 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-config-data\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.579607 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-scripts\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.587778 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.588140 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g6cg\" (UniqueName: \"kubernetes.io/projected/38afd4c5-3e1b-4a5f-a82c-ac270a369417-kube-api-access-5g6cg\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.590553 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.640887 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.696260 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.696438 4693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 16:14:56 crc kubenswrapper[4693]: I1212 16:14:56.697069 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 12 16:14:57 crc kubenswrapper[4693]: I1212 16:14:57.298126 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-b95d-account-create-update-km26x" Dec 12 16:14:57 crc kubenswrapper[4693]: I1212 16:14:57.365650 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:14:57 crc kubenswrapper[4693]: E1212 16:14:57.365990 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:14:57 crc kubenswrapper[4693]: I1212 16:14:57.386061 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f145842b-33ac-4d05-b482-706a9cedb695" path="/var/lib/kubelet/pods/f145842b-33ac-4d05-b482-706a9cedb695/volumes" Dec 12 16:14:57 crc kubenswrapper[4693]: I1212 16:14:57.413696 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e9c57e3-dcab-45d3-9013-a7348e6d94ec-operator-scripts\") pod \"5e9c57e3-dcab-45d3-9013-a7348e6d94ec\" (UID: \"5e9c57e3-dcab-45d3-9013-a7348e6d94ec\") " Dec 12 16:14:57 crc kubenswrapper[4693]: I1212 16:14:57.413972 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2rrl\" (UniqueName: \"kubernetes.io/projected/5e9c57e3-dcab-45d3-9013-a7348e6d94ec-kube-api-access-x2rrl\") pod \"5e9c57e3-dcab-45d3-9013-a7348e6d94ec\" (UID: \"5e9c57e3-dcab-45d3-9013-a7348e6d94ec\") " Dec 12 16:14:57 crc kubenswrapper[4693]: I1212 16:14:57.416215 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e9c57e3-dcab-45d3-9013-a7348e6d94ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e9c57e3-dcab-45d3-9013-a7348e6d94ec" (UID: "5e9c57e3-dcab-45d3-9013-a7348e6d94ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:14:57 crc kubenswrapper[4693]: I1212 16:14:57.437003 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e9c57e3-dcab-45d3-9013-a7348e6d94ec-kube-api-access-x2rrl" (OuterVolumeSpecName: "kube-api-access-x2rrl") pod "5e9c57e3-dcab-45d3-9013-a7348e6d94ec" (UID: "5e9c57e3-dcab-45d3-9013-a7348e6d94ec"). InnerVolumeSpecName "kube-api-access-x2rrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:14:57 crc kubenswrapper[4693]: I1212 16:14:57.516943 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e9c57e3-dcab-45d3-9013-a7348e6d94ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:57 crc kubenswrapper[4693]: I1212 16:14:57.516975 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2rrl\" (UniqueName: \"kubernetes.io/projected/5e9c57e3-dcab-45d3-9013-a7348e6d94ec-kube-api-access-x2rrl\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:57 crc kubenswrapper[4693]: I1212 16:14:57.688465 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-b95d-account-create-update-km26x" Dec 12 16:14:57 crc kubenswrapper[4693]: I1212 16:14:57.689399 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-b95d-account-create-update-km26x" event={"ID":"5e9c57e3-dcab-45d3-9013-a7348e6d94ec","Type":"ContainerDied","Data":"2cbdd9cf1ade61e96b9bd3a84446cb8e5d8f9c1015c83767151679ec0cc49a14"} Dec 12 16:14:57 crc kubenswrapper[4693]: I1212 16:14:57.689485 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cbdd9cf1ade61e96b9bd3a84446cb8e5d8f9c1015c83767151679ec0cc49a14" Dec 12 16:14:58 crc kubenswrapper[4693]: I1212 16:14:58.091580 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-4447h" Dec 12 16:14:58 crc kubenswrapper[4693]: I1212 16:14:58.237224 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6250357e-de93-4024-acda-b1ebbf788eca-operator-scripts\") pod \"6250357e-de93-4024-acda-b1ebbf788eca\" (UID: \"6250357e-de93-4024-acda-b1ebbf788eca\") " Dec 12 16:14:58 crc kubenswrapper[4693]: I1212 16:14:58.237774 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dhhd\" (UniqueName: \"kubernetes.io/projected/6250357e-de93-4024-acda-b1ebbf788eca-kube-api-access-2dhhd\") pod \"6250357e-de93-4024-acda-b1ebbf788eca\" (UID: \"6250357e-de93-4024-acda-b1ebbf788eca\") " Dec 12 16:14:58 crc kubenswrapper[4693]: I1212 16:14:58.238058 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6250357e-de93-4024-acda-b1ebbf788eca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6250357e-de93-4024-acda-b1ebbf788eca" (UID: "6250357e-de93-4024-acda-b1ebbf788eca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:14:58 crc kubenswrapper[4693]: I1212 16:14:58.238618 4693 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6250357e-de93-4024-acda-b1ebbf788eca-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:58 crc kubenswrapper[4693]: I1212 16:14:58.246936 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6250357e-de93-4024-acda-b1ebbf788eca-kube-api-access-2dhhd" (OuterVolumeSpecName: "kube-api-access-2dhhd") pod "6250357e-de93-4024-acda-b1ebbf788eca" (UID: "6250357e-de93-4024-acda-b1ebbf788eca"). InnerVolumeSpecName "kube-api-access-2dhhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:14:58 crc kubenswrapper[4693]: I1212 16:14:58.344085 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dhhd\" (UniqueName: \"kubernetes.io/projected/6250357e-de93-4024-acda-b1ebbf788eca-kube-api-access-2dhhd\") on node \"crc\" DevicePath \"\"" Dec 12 16:14:58 crc kubenswrapper[4693]: I1212 16:14:58.699019 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-4447h" event={"ID":"6250357e-de93-4024-acda-b1ebbf788eca","Type":"ContainerDied","Data":"af310bc361ef644a1b0368a4952b435a23865d63c49c601ceb0bfb840d40ca2e"} Dec 12 16:14:58 crc kubenswrapper[4693]: I1212 16:14:58.699365 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af310bc361ef644a1b0368a4952b435a23865d63c49c601ceb0bfb840d40ca2e" Dec 12 16:14:58 crc kubenswrapper[4693]: I1212 16:14:58.699102 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-4447h" Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.087996 4693 scope.go:117] "RemoveContainer" containerID="4a31cf28805453c7af7c48a1a570e4a1721332c5c72985be48ada93cee1c10db" Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.156669 4693 scope.go:117] "RemoveContainer" containerID="decc1783d7ef63470404f955d978ddb45a77e3c44e3f8ebd15ceab5c4dc8d3c7" Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.408406 4693 scope.go:117] "RemoveContainer" containerID="6822b9c4d5daf443ad07e167f7d0e2cc41dd18508802a6f32544a3d288408bc1" Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.442305 4693 scope.go:117] "RemoveContainer" containerID="b92e0bd87bad16f893929f322254f4745634d6327681fea4d798e11f89a17487" Dec 12 16:14:59 crc kubenswrapper[4693]: E1212 16:14:59.442863 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b92e0bd87bad16f893929f322254f4745634d6327681fea4d798e11f89a17487\": container with ID starting with b92e0bd87bad16f893929f322254f4745634d6327681fea4d798e11f89a17487 not found: ID does not exist" containerID="b92e0bd87bad16f893929f322254f4745634d6327681fea4d798e11f89a17487" Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.442905 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b92e0bd87bad16f893929f322254f4745634d6327681fea4d798e11f89a17487"} err="failed to get container status \"b92e0bd87bad16f893929f322254f4745634d6327681fea4d798e11f89a17487\": rpc error: code = NotFound desc = could not find container \"b92e0bd87bad16f893929f322254f4745634d6327681fea4d798e11f89a17487\": container with ID starting with b92e0bd87bad16f893929f322254f4745634d6327681fea4d798e11f89a17487 not found: ID does not exist" Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.442952 4693 scope.go:117] "RemoveContainer" containerID="4a31cf28805453c7af7c48a1a570e4a1721332c5c72985be48ada93cee1c10db" Dec 12 16:14:59 crc kubenswrapper[4693]: E1212 16:14:59.444501 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a31cf28805453c7af7c48a1a570e4a1721332c5c72985be48ada93cee1c10db\": container with ID starting with 4a31cf28805453c7af7c48a1a570e4a1721332c5c72985be48ada93cee1c10db not found: ID does not exist" containerID="4a31cf28805453c7af7c48a1a570e4a1721332c5c72985be48ada93cee1c10db" Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.444545 4693 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a31cf28805453c7af7c48a1a570e4a1721332c5c72985be48ada93cee1c10db"} err="failed to get container status \"4a31cf28805453c7af7c48a1a570e4a1721332c5c72985be48ada93cee1c10db\": rpc error: code = NotFound desc = could not find container \"4a31cf28805453c7af7c48a1a570e4a1721332c5c72985be48ada93cee1c10db\": container with ID starting with 4a31cf28805453c7af7c48a1a570e4a1721332c5c72985be48ada93cee1c10db not found: ID does not exist" Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.444577 4693 scope.go:117] "RemoveContainer" containerID="decc1783d7ef63470404f955d978ddb45a77e3c44e3f8ebd15ceab5c4dc8d3c7" Dec 12 16:14:59 crc kubenswrapper[4693]: E1212 16:14:59.444889 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"decc1783d7ef63470404f955d978ddb45a77e3c44e3f8ebd15ceab5c4dc8d3c7\": container with ID starting with decc1783d7ef63470404f955d978ddb45a77e3c44e3f8ebd15ceab5c4dc8d3c7 not found: ID does not exist" containerID="decc1783d7ef63470404f955d978ddb45a77e3c44e3f8ebd15ceab5c4dc8d3c7" Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.444911 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"decc1783d7ef63470404f955d978ddb45a77e3c44e3f8ebd15ceab5c4dc8d3c7"} err="failed to get container status \"decc1783d7ef63470404f955d978ddb45a77e3c44e3f8ebd15ceab5c4dc8d3c7\": rpc error: code = NotFound desc = could not find container \"decc1783d7ef63470404f955d978ddb45a77e3c44e3f8ebd15ceab5c4dc8d3c7\": container with ID starting with decc1783d7ef63470404f955d978ddb45a77e3c44e3f8ebd15ceab5c4dc8d3c7 not found: ID does not exist" Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.444924 4693 scope.go:117] "RemoveContainer" containerID="6822b9c4d5daf443ad07e167f7d0e2cc41dd18508802a6f32544a3d288408bc1" Dec 12 16:14:59 crc kubenswrapper[4693]: E1212 16:14:59.445167 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6822b9c4d5daf443ad07e167f7d0e2cc41dd18508802a6f32544a3d288408bc1\": container with ID starting with 6822b9c4d5daf443ad07e167f7d0e2cc41dd18508802a6f32544a3d288408bc1 not found: ID does not exist" containerID="6822b9c4d5daf443ad07e167f7d0e2cc41dd18508802a6f32544a3d288408bc1" Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.445193 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6822b9c4d5daf443ad07e167f7d0e2cc41dd18508802a6f32544a3d288408bc1"} err="failed to get container status \"6822b9c4d5daf443ad07e167f7d0e2cc41dd18508802a6f32544a3d288408bc1\": rpc error: code = NotFound desc = could not find container \"6822b9c4d5daf443ad07e167f7d0e2cc41dd18508802a6f32544a3d288408bc1\": container with ID starting with 6822b9c4d5daf443ad07e167f7d0e2cc41dd18508802a6f32544a3d288408bc1 not found: ID does not exist" Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.613378 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.711668 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"870d139b-dca4-4055-acd5-6b264b9cf889","Type":"ContainerStarted","Data":"254f571686df56256997b8911120c0474ece6d2e0c11fa4b24eba51c7ba42d8f"} Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.711765 4693 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="870d139b-dca4-4055-acd5-6b264b9cf889" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://254f571686df56256997b8911120c0474ece6d2e0c11fa4b24eba51c7ba42d8f" gracePeriod=30 Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.713854 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7179e70d-0681-41fa-ba91-298bd275b282","Type":"ContainerStarted","Data":"f3551060ac6700170c940ec8c451145d3d7c0cae099745e1a33cf8ab94466485"} Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.716363 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38afd4c5-3e1b-4a5f-a82c-ac270a369417","Type":"ContainerStarted","Data":"ee64ad6b51d829d9d249993fa9d8030a5211340d4d46d25dc7ce726f329a56b9"} Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.725043 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72708ce9-a61f-447d-b58b-4f23ac302313","Type":"ContainerStarted","Data":"d0fd555ffc8fc85e6ef00db109b601d9822b1fa19a208805d30c57e2d6103f7f"} Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.727518 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff9d8d2e-ddae-487e-b259-334e7df154d4","Type":"ContainerStarted","Data":"bf7ab1f648271fc982958c5c19e93dd9bcf793297545cd90f355a24cb06ecf9b"} Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.732378 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.67097642 podStartE2EDuration="9.732359684s" podCreationTimestamp="2025-12-12 16:14:50 +0000 UTC" firstStartedPulling="2025-12-12 16:14:53.048223694 +0000 UTC m=+1720.216863295" lastFinishedPulling="2025-12-12 16:14:59.109606958 +0000 UTC m=+1726.278246559" observedRunningTime="2025-12-12 16:14:59.728214322 +0000 UTC m=+1726.896853923" watchObservedRunningTime="2025-12-12 16:14:59.732359684 +0000 UTC m=+1726.900999285" Dec 12 16:14:59 crc kubenswrapper[4693]: I1212 16:14:59.753290 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.677869447 podStartE2EDuration="9.753251129s" podCreationTimestamp="2025-12-12 16:14:50 +0000 UTC" firstStartedPulling="2025-12-12 16:14:53.084787323 +0000 UTC m=+1720.253426924" lastFinishedPulling="2025-12-12 16:14:59.160168995 +0000 UTC m=+1726.328808606" observedRunningTime="2025-12-12 16:14:59.750326379 +0000 UTC m=+1726.918965980" watchObservedRunningTime="2025-12-12 16:14:59.753251129 +0000 UTC m=+1726.921890730" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.178181 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f"] Dec 12 16:15:00 crc kubenswrapper[4693]: E1212 16:15:00.179194 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6250357e-de93-4024-acda-b1ebbf788eca" containerName="mariadb-database-create" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.179597 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="6250357e-de93-4024-acda-b1ebbf788eca" containerName="mariadb-database-create" Dec 12 16:15:00 crc kubenswrapper[4693]: E1212 16:15:00.179661 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e9c57e3-dcab-45d3-9013-a7348e6d94ec" containerName="mariadb-account-create-update" Dec 12 16:15:00 crc 
kubenswrapper[4693]: I1212 16:15:00.179710 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e9c57e3-dcab-45d3-9013-a7348e6d94ec" containerName="mariadb-account-create-update" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.180107 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="6250357e-de93-4024-acda-b1ebbf788eca" containerName="mariadb-database-create" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.185606 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e9c57e3-dcab-45d3-9013-a7348e6d94ec" containerName="mariadb-account-create-update" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.186810 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.189623 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.189811 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.197435 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f"] Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.295651 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-config-volume\") pod \"collect-profiles-29425935-6984f\" (UID: \"bd1c81c4-e711-4356-a2a5-4647ae66ffb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.295709 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlzlj\" (UniqueName: \"kubernetes.io/projected/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-kube-api-access-nlzlj\") pod \"collect-profiles-29425935-6984f\" (UID: \"bd1c81c4-e711-4356-a2a5-4647ae66ffb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.295745 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-secret-volume\") pod \"collect-profiles-29425935-6984f\" (UID: \"bd1c81c4-e711-4356-a2a5-4647ae66ffb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.398886 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-config-volume\") pod \"collect-profiles-29425935-6984f\" (UID: \"bd1c81c4-e711-4356-a2a5-4647ae66ffb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.398940 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlzlj\" (UniqueName: \"kubernetes.io/projected/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-kube-api-access-nlzlj\") pod \"collect-profiles-29425935-6984f\" (UID: \"bd1c81c4-e711-4356-a2a5-4647ae66ffb7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.398997 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-secret-volume\") pod \"collect-profiles-29425935-6984f\" (UID: \"bd1c81c4-e711-4356-a2a5-4647ae66ffb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.400182 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-config-volume\") pod \"collect-profiles-29425935-6984f\" (UID: \"bd1c81c4-e711-4356-a2a5-4647ae66ffb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.484966 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-secret-volume\") pod \"collect-profiles-29425935-6984f\" (UID: \"bd1c81c4-e711-4356-a2a5-4647ae66ffb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.489599 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlzlj\" (UniqueName: \"kubernetes.io/projected/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-kube-api-access-nlzlj\") pod \"collect-profiles-29425935-6984f\" (UID: \"bd1c81c4-e711-4356-a2a5-4647ae66ffb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.529942 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.743855 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff9d8d2e-ddae-487e-b259-334e7df154d4","Type":"ContainerStarted","Data":"14b8237db29e568147b3955d6b33057a26fd11dee58e9c24a8a6765d48211746"} Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.750527 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72708ce9-a61f-447d-b58b-4f23ac302313","Type":"ContainerStarted","Data":"7d11ff459a3cadaad394326c8f946bcdd08427e591debc811f2559cc6c5dc52e"} Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.750623 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="72708ce9-a61f-447d-b58b-4f23ac302313" containerName="nova-metadata-log" containerID="cri-o://d0fd555ffc8fc85e6ef00db109b601d9822b1fa19a208805d30c57e2d6103f7f" gracePeriod=30 Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.750713 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="72708ce9-a61f-447d-b58b-4f23ac302313" containerName="nova-metadata-metadata" containerID="cri-o://7d11ff459a3cadaad394326c8f946bcdd08427e591debc811f2559cc6c5dc52e" gracePeriod=30 Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.769441 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.801246116 podStartE2EDuration="10.7694065s" podCreationTimestamp="2025-12-12 16:14:50 +0000 UTC" firstStartedPulling="2025-12-12 16:14:53.190254264 +0000 UTC m=+1720.358893865" lastFinishedPulling="2025-12-12 16:14:59.158414648 +0000 UTC m=+1726.327054249" observedRunningTime="2025-12-12 16:15:00.764417945 +0000 UTC m=+1727.933057546" watchObservedRunningTime="2025-12-12 16:15:00.7694065 +0000 UTC m=+1727.938046151" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.808657 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.595246847 podStartE2EDuration="10.80863681s" podCreationTimestamp="2025-12-12 16:14:50 +0000 UTC" firstStartedPulling="2025-12-12 16:14:52.945539578 +0000 UTC m=+1720.114179179" lastFinishedPulling="2025-12-12 16:14:59.158929541 +0000 UTC m=+1726.327569142" observedRunningTime="2025-12-12 16:15:00.791095896 +0000 UTC m=+1727.959735497" watchObservedRunningTime="2025-12-12 16:15:00.80863681 +0000 UTC m=+1727.977276411" Dec 12 16:15:00 crc kubenswrapper[4693]: I1212 16:15:00.910062 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.092571 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.092840 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.153135 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-s9j89"] Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.159038 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-s9j89" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.168244 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jzr72" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.168497 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.168599 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.168704 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.192200 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-s9j89"] Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.231425 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-config-data\") pod \"aodh-db-sync-s9j89\" (UID: \"50ace206-3061-4478-910f-dcbbf46d5f72\") " pod="openstack/aodh-db-sync-s9j89" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.231466 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b4xj\" (UniqueName: \"kubernetes.io/projected/50ace206-3061-4478-910f-dcbbf46d5f72-kube-api-access-6b4xj\") pod \"aodh-db-sync-s9j89\" (UID: \"50ace206-3061-4478-910f-dcbbf46d5f72\") " pod="openstack/aodh-db-sync-s9j89" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.231544 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-combined-ca-bundle\") pod \"aodh-db-sync-s9j89\" (UID: \"50ace206-3061-4478-910f-dcbbf46d5f72\") " pod="openstack/aodh-db-sync-s9j89" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.231629 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-scripts\") pod \"aodh-db-sync-s9j89\" (UID: \"50ace206-3061-4478-910f-dcbbf46d5f72\") " pod="openstack/aodh-db-sync-s9j89" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.277119 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.277175 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.334946 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-combined-ca-bundle\") pod \"aodh-db-sync-s9j89\" (UID: \"50ace206-3061-4478-910f-dcbbf46d5f72\") " pod="openstack/aodh-db-sync-s9j89" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.335169 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-scripts\") pod \"aodh-db-sync-s9j89\" (UID: \"50ace206-3061-4478-910f-dcbbf46d5f72\") " pod="openstack/aodh-db-sync-s9j89" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.335422 4693 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-config-data\") pod \"aodh-db-sync-s9j89\" (UID: \"50ace206-3061-4478-910f-dcbbf46d5f72\") " pod="openstack/aodh-db-sync-s9j89" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.335481 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b4xj\" (UniqueName: \"kubernetes.io/projected/50ace206-3061-4478-910f-dcbbf46d5f72-kube-api-access-6b4xj\") pod \"aodh-db-sync-s9j89\" (UID: \"50ace206-3061-4478-910f-dcbbf46d5f72\") " pod="openstack/aodh-db-sync-s9j89" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.344109 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-combined-ca-bundle\") pod \"aodh-db-sync-s9j89\" (UID: \"50ace206-3061-4478-910f-dcbbf46d5f72\") " pod="openstack/aodh-db-sync-s9j89" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.349672 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-scripts\") pod \"aodh-db-sync-s9j89\" (UID: \"50ace206-3061-4478-910f-dcbbf46d5f72\") " pod="openstack/aodh-db-sync-s9j89" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.355007 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-config-data\") pod \"aodh-db-sync-s9j89\" (UID: \"50ace206-3061-4478-910f-dcbbf46d5f72\") " pod="openstack/aodh-db-sync-s9j89" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.363984 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b4xj\" (UniqueName: \"kubernetes.io/projected/50ace206-3061-4478-910f-dcbbf46d5f72-kube-api-access-6b4xj\") pod \"aodh-db-sync-s9j89\" (UID: \"50ace206-3061-4478-910f-dcbbf46d5f72\") " pod="openstack/aodh-db-sync-s9j89" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.380922 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.380958 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.395493 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.521496 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-s9j89" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.529005 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f"] Dec 12 16:15:01 crc kubenswrapper[4693]: W1212 16:15:01.548390 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd1c81c4_e711_4356_a2a5_4647ae66ffb7.slice/crio-e368fd5de5e17fe5fc8d212dcd01cd183b88eb70cf6965a95333f79d14ed7734 WatchSource:0}: Error finding container e368fd5de5e17fe5fc8d212dcd01cd183b88eb70cf6965a95333f79d14ed7734: Status 404 returned error can't find the container with id e368fd5de5e17fe5fc8d212dcd01cd183b88eb70cf6965a95333f79d14ed7734 Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.745418 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.829598 4693 generic.go:334] "Generic (PLEG): container finished" podID="72708ce9-a61f-447d-b58b-4f23ac302313" containerID="7d11ff459a3cadaad394326c8f946bcdd08427e591debc811f2559cc6c5dc52e" exitCode=0 Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.829629 4693 generic.go:334] "Generic (PLEG): container finished" podID="72708ce9-a61f-447d-b58b-4f23ac302313" containerID="d0fd555ffc8fc85e6ef00db109b601d9822b1fa19a208805d30c57e2d6103f7f" exitCode=143 Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.829695 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72708ce9-a61f-447d-b58b-4f23ac302313","Type":"ContainerDied","Data":"7d11ff459a3cadaad394326c8f946bcdd08427e591debc811f2559cc6c5dc52e"} Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.829728 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72708ce9-a61f-447d-b58b-4f23ac302313","Type":"ContainerDied","Data":"d0fd555ffc8fc85e6ef00db109b601d9822b1fa19a208805d30c57e2d6103f7f"} Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.861018 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-4fjxz"] Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.861266 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" podUID="2b9c8d52-1799-496d-9911-867479d89883" containerName="dnsmasq-dns" containerID="cri-o://b8e4a68e48bcfd6975299d00dc6e1e4ba81b7e28b96cf32504edf6bdb587243d" gracePeriod=10 Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.863791 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38afd4c5-3e1b-4a5f-a82c-ac270a369417","Type":"ContainerStarted","Data":"84c0935b6357be2256a4d54f7e475cadf80d8b3d5c98477ed87493e12be0c57c"} Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.866445 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f" event={"ID":"bd1c81c4-e711-4356-a2a5-4647ae66ffb7","Type":"ContainerStarted","Data":"e368fd5de5e17fe5fc8d212dcd01cd183b88eb70cf6965a95333f79d14ed7734"} Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.924762 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f" podStartSLOduration=1.924715252 podStartE2EDuration="1.924715252s" 
podCreationTimestamp="2025-12-12 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:15:01.903602821 +0000 UTC m=+1729.072242422" watchObservedRunningTime="2025-12-12 16:15:01.924715252 +0000 UTC m=+1729.093354853" Dec 12 16:15:01 crc kubenswrapper[4693]: I1212 16:15:01.983784 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.024261 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.083559 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72708ce9-a61f-447d-b58b-4f23ac302313-logs\") pod \"72708ce9-a61f-447d-b58b-4f23ac302313\" (UID: \"72708ce9-a61f-447d-b58b-4f23ac302313\") " Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.083722 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72708ce9-a61f-447d-b58b-4f23ac302313-combined-ca-bundle\") pod \"72708ce9-a61f-447d-b58b-4f23ac302313\" (UID: \"72708ce9-a61f-447d-b58b-4f23ac302313\") " Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.083874 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72708ce9-a61f-447d-b58b-4f23ac302313-config-data\") pod \"72708ce9-a61f-447d-b58b-4f23ac302313\" (UID: \"72708ce9-a61f-447d-b58b-4f23ac302313\") " Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.083940 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp9pv\" (UniqueName: \"kubernetes.io/projected/72708ce9-a61f-447d-b58b-4f23ac302313-kube-api-access-cp9pv\") pod \"72708ce9-a61f-447d-b58b-4f23ac302313\" (UID: \"72708ce9-a61f-447d-b58b-4f23ac302313\") " Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.084518 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72708ce9-a61f-447d-b58b-4f23ac302313-logs" (OuterVolumeSpecName: "logs") pod "72708ce9-a61f-447d-b58b-4f23ac302313" (UID: "72708ce9-a61f-447d-b58b-4f23ac302313"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.085535 4693 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72708ce9-a61f-447d-b58b-4f23ac302313-logs\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.095419 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72708ce9-a61f-447d-b58b-4f23ac302313-kube-api-access-cp9pv" (OuterVolumeSpecName: "kube-api-access-cp9pv") pod "72708ce9-a61f-447d-b58b-4f23ac302313" (UID: "72708ce9-a61f-447d-b58b-4f23ac302313"). InnerVolumeSpecName "kube-api-access-cp9pv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.178699 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff9d8d2e-ddae-487e-b259-334e7df154d4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.232:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.179023 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff9d8d2e-ddae-487e-b259-334e7df154d4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.232:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.189237 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp9pv\" (UniqueName: \"kubernetes.io/projected/72708ce9-a61f-447d-b58b-4f23ac302313-kube-api-access-cp9pv\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.232599 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72708ce9-a61f-447d-b58b-4f23ac302313-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72708ce9-a61f-447d-b58b-4f23ac302313" (UID: "72708ce9-a61f-447d-b58b-4f23ac302313"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.273266 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-s9j89"] Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.292632 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72708ce9-a61f-447d-b58b-4f23ac302313-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.329492 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72708ce9-a61f-447d-b58b-4f23ac302313-config-data" (OuterVolumeSpecName: "config-data") pod "72708ce9-a61f-447d-b58b-4f23ac302313" (UID: "72708ce9-a61f-447d-b58b-4f23ac302313"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.396576 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72708ce9-a61f-447d-b58b-4f23ac302313-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.672257 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz"
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.824473 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f4gf\" (UniqueName: \"kubernetes.io/projected/2b9c8d52-1799-496d-9911-867479d89883-kube-api-access-8f4gf\") pod \"2b9c8d52-1799-496d-9911-867479d89883\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") "
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.824575 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-ovsdbserver-sb\") pod \"2b9c8d52-1799-496d-9911-867479d89883\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") "
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.824697 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-dns-svc\") pod \"2b9c8d52-1799-496d-9911-867479d89883\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") "
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.824817 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-dns-swift-storage-0\") pod \"2b9c8d52-1799-496d-9911-867479d89883\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") "
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.824899 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-config\") pod \"2b9c8d52-1799-496d-9911-867479d89883\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") "
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.824961 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-ovsdbserver-nb\") pod \"2b9c8d52-1799-496d-9911-867479d89883\" (UID: \"2b9c8d52-1799-496d-9911-867479d89883\") "
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.845547 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b9c8d52-1799-496d-9911-867479d89883-kube-api-access-8f4gf" (OuterVolumeSpecName: "kube-api-access-8f4gf") pod "2b9c8d52-1799-496d-9911-867479d89883" (UID: "2b9c8d52-1799-496d-9911-867479d89883"). InnerVolumeSpecName "kube-api-access-8f4gf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.899144 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b9c8d52-1799-496d-9911-867479d89883" (UID: "2b9c8d52-1799-496d-9911-867479d89883"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.912810 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2b9c8d52-1799-496d-9911-867479d89883" (UID: "2b9c8d52-1799-496d-9911-867479d89883"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.937704 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b9c8d52-1799-496d-9911-867479d89883" (UID: "2b9c8d52-1799-496d-9911-867479d89883"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.948571 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.948596 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72708ce9-a61f-447d-b58b-4f23ac302313","Type":"ContainerDied","Data":"6af90412695f790f1cad218df786b749d48e701c8b491779f43f9f8e3b4b0e96"}
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.948664 4693 scope.go:117] "RemoveContainer" containerID="7d11ff459a3cadaad394326c8f946bcdd08427e591debc811f2559cc6c5dc52e"
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.962633 4693 generic.go:334] "Generic (PLEG): container finished" podID="2b9c8d52-1799-496d-9911-867479d89883" containerID="b8e4a68e48bcfd6975299d00dc6e1e4ba81b7e28b96cf32504edf6bdb587243d" exitCode=0
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.962725 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" event={"ID":"2b9c8d52-1799-496d-9911-867479d89883","Type":"ContainerDied","Data":"b8e4a68e48bcfd6975299d00dc6e1e4ba81b7e28b96cf32504edf6bdb587243d"}
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.962756 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz" event={"ID":"2b9c8d52-1799-496d-9911-867479d89883","Type":"ContainerDied","Data":"b4b82468a89598a6399ef5d319bf7cc6e62421165a292125f7a072e5bb2d0a8a"}
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.962869 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-4fjxz"
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.967225 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-s9j89" event={"ID":"50ace206-3061-4478-910f-dcbbf46d5f72","Type":"ContainerStarted","Data":"c254931a51ec3ecc56961163325db68889ddb26819b204a55ce47710f555cc45"}
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.974068 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-config" (OuterVolumeSpecName: "config") pod "2b9c8d52-1799-496d-9911-867479d89883" (UID: "2b9c8d52-1799-496d-9911-867479d89883"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.979793 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38afd4c5-3e1b-4a5f-a82c-ac270a369417","Type":"ContainerStarted","Data":"c57fc982e7da26280b4da3a615702607dad9ceb5264526dd38d48c56ca0a9533"}
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.985561 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.986170 4693 generic.go:334] "Generic (PLEG): container finished" podID="bd1c81c4-e711-4356-a2a5-4647ae66ffb7" containerID="7b05f03eb1249f7e9f7c388c7d8be1b77a7c101461537a568b82c5a49ee69da8" exitCode=0
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.988403 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f" event={"ID":"bd1c81c4-e711-4356-a2a5-4647ae66ffb7","Type":"ContainerDied","Data":"7b05f03eb1249f7e9f7c388c7d8be1b77a7c101461537a568b82c5a49ee69da8"}
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.988514 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f4gf\" (UniqueName: \"kubernetes.io/projected/2b9c8d52-1799-496d-9911-867479d89883-kube-api-access-8f4gf\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.990868 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.991757 4693 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:02 crc kubenswrapper[4693]: I1212 16:15:02.993411 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b9c8d52-1799-496d-9911-867479d89883" (UID: "2b9c8d52-1799-496d-9911-867479d89883"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.009456 4693 scope.go:117] "RemoveContainer" containerID="d0fd555ffc8fc85e6ef00db109b601d9822b1fa19a208805d30c57e2d6103f7f"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.061736 4693 scope.go:117] "RemoveContainer" containerID="b8e4a68e48bcfd6975299d00dc6e1e4ba81b7e28b96cf32504edf6bdb587243d"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.091038 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.099466 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.101852 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9c8d52-1799-496d-9911-867479d89883-config\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.140682 4693 scope.go:117] "RemoveContainer" containerID="59e229394981603fc0305f84b89d1074f1a0ef60906a9beeefd5c630d799dbce"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.192012 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.218395 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 12 16:15:03 crc kubenswrapper[4693]: E1212 16:15:03.218914 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9c8d52-1799-496d-9911-867479d89883" containerName="init"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.218924 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9c8d52-1799-496d-9911-867479d89883" containerName="init"
Dec 12 16:15:03 crc kubenswrapper[4693]: E1212 16:15:03.218937 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72708ce9-a61f-447d-b58b-4f23ac302313" containerName="nova-metadata-log"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.218943 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="72708ce9-a61f-447d-b58b-4f23ac302313" containerName="nova-metadata-log"
Dec 12 16:15:03 crc kubenswrapper[4693]: E1212 16:15:03.218990 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9c8d52-1799-496d-9911-867479d89883" containerName="dnsmasq-dns"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.218996 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9c8d52-1799-496d-9911-867479d89883" containerName="dnsmasq-dns"
Dec 12 16:15:03 crc kubenswrapper[4693]: E1212 16:15:03.219013 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72708ce9-a61f-447d-b58b-4f23ac302313" containerName="nova-metadata-metadata"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.219018 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="72708ce9-a61f-447d-b58b-4f23ac302313" containerName="nova-metadata-metadata"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.219257 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b9c8d52-1799-496d-9911-867479d89883" containerName="dnsmasq-dns"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.219291 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="72708ce9-a61f-447d-b58b-4f23ac302313" containerName="nova-metadata-log"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.219302 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="72708ce9-a61f-447d-b58b-4f23ac302313" containerName="nova-metadata-metadata"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.220812 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.223826 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.224021 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.231378 4693 scope.go:117] "RemoveContainer" containerID="b8e4a68e48bcfd6975299d00dc6e1e4ba81b7e28b96cf32504edf6bdb587243d"
Dec 12 16:15:03 crc kubenswrapper[4693]: E1212 16:15:03.231731 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8e4a68e48bcfd6975299d00dc6e1e4ba81b7e28b96cf32504edf6bdb587243d\": container with ID starting with b8e4a68e48bcfd6975299d00dc6e1e4ba81b7e28b96cf32504edf6bdb587243d not found: ID does not exist" containerID="b8e4a68e48bcfd6975299d00dc6e1e4ba81b7e28b96cf32504edf6bdb587243d"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.231761 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8e4a68e48bcfd6975299d00dc6e1e4ba81b7e28b96cf32504edf6bdb587243d"} err="failed to get container status \"b8e4a68e48bcfd6975299d00dc6e1e4ba81b7e28b96cf32504edf6bdb587243d\": rpc error: code = NotFound desc = could not find container \"b8e4a68e48bcfd6975299d00dc6e1e4ba81b7e28b96cf32504edf6bdb587243d\": container with ID starting with b8e4a68e48bcfd6975299d00dc6e1e4ba81b7e28b96cf32504edf6bdb587243d not found: ID does not exist"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.231791 4693 scope.go:117] "RemoveContainer" containerID="59e229394981603fc0305f84b89d1074f1a0ef60906a9beeefd5c630d799dbce"
Dec 12 16:15:03 crc kubenswrapper[4693]: E1212 16:15:03.235950 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59e229394981603fc0305f84b89d1074f1a0ef60906a9beeefd5c630d799dbce\": container with ID starting with 59e229394981603fc0305f84b89d1074f1a0ef60906a9beeefd5c630d799dbce not found: ID does not exist" containerID="59e229394981603fc0305f84b89d1074f1a0ef60906a9beeefd5c630d799dbce"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.236088 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59e229394981603fc0305f84b89d1074f1a0ef60906a9beeefd5c630d799dbce"} err="failed to get container status \"59e229394981603fc0305f84b89d1074f1a0ef60906a9beeefd5c630d799dbce\": rpc error: code = NotFound desc = could not find container \"59e229394981603fc0305f84b89d1074f1a0ef60906a9beeefd5c630d799dbce\": container with ID starting with 59e229394981603fc0305f84b89d1074f1a0ef60906a9beeefd5c630d799dbce not found: ID does not exist"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.239521 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.432598 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") " pod="openstack/nova-metadata-0"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.432652 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvlnq\" (UniqueName: \"kubernetes.io/projected/0c968a82-7344-4fe7-8b88-f24162ff13df-kube-api-access-wvlnq\") pod \"nova-metadata-0\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") " pod="openstack/nova-metadata-0"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.432759 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c968a82-7344-4fe7-8b88-f24162ff13df-logs\") pod \"nova-metadata-0\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") " pod="openstack/nova-metadata-0"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.432841 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-config-data\") pod \"nova-metadata-0\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") " pod="openstack/nova-metadata-0"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.432917 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") " pod="openstack/nova-metadata-0"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.434557 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72708ce9-a61f-447d-b58b-4f23ac302313" path="/var/lib/kubelet/pods/72708ce9-a61f-447d-b58b-4f23ac302313/volumes"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.554987 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") " pod="openstack/nova-metadata-0"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.555072 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvlnq\" (UniqueName: \"kubernetes.io/projected/0c968a82-7344-4fe7-8b88-f24162ff13df-kube-api-access-wvlnq\") pod \"nova-metadata-0\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") " pod="openstack/nova-metadata-0"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.555210 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c968a82-7344-4fe7-8b88-f24162ff13df-logs\") pod \"nova-metadata-0\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") " pod="openstack/nova-metadata-0"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.555348 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-config-data\") pod \"nova-metadata-0\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") " pod="openstack/nova-metadata-0"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.555431 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") " pod="openstack/nova-metadata-0"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.556289 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c968a82-7344-4fe7-8b88-f24162ff13df-logs\") pod \"nova-metadata-0\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") " pod="openstack/nova-metadata-0"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.573122 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-config-data\") pod \"nova-metadata-0\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") " pod="openstack/nova-metadata-0"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.585805 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") " pod="openstack/nova-metadata-0"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.590347 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-4fjxz"]
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.591007 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") " pod="openstack/nova-metadata-0"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.607164 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvlnq\" (UniqueName: \"kubernetes.io/projected/0c968a82-7344-4fe7-8b88-f24162ff13df-kube-api-access-wvlnq\") pod \"nova-metadata-0\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") " pod="openstack/nova-metadata-0"
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.622727 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-4fjxz"]
Dec 12 16:15:03 crc kubenswrapper[4693]: I1212 16:15:03.885073 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 12 16:15:04 crc kubenswrapper[4693]: I1212 16:15:04.055227 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38afd4c5-3e1b-4a5f-a82c-ac270a369417","Type":"ContainerStarted","Data":"c10be96a768cad36cf79bc275c59b1767b7236beb6632bce964d2dc57a12150a"}
Dec 12 16:15:04 crc kubenswrapper[4693]: I1212 16:15:04.751048 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 12 16:15:04 crc kubenswrapper[4693]: I1212 16:15:04.780928 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f"
Dec 12 16:15:04 crc kubenswrapper[4693]: W1212 16:15:04.799577 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c968a82_7344_4fe7_8b88_f24162ff13df.slice/crio-c44e148c774d7329834f9934016b16c9b2390f3a23f1ddd59c1f2789d9b79eec WatchSource:0}: Error finding container c44e148c774d7329834f9934016b16c9b2390f3a23f1ddd59c1f2789d9b79eec: Status 404 returned error can't find the container with id c44e148c774d7329834f9934016b16c9b2390f3a23f1ddd59c1f2789d9b79eec
Dec 12 16:15:04 crc kubenswrapper[4693]: I1212 16:15:04.809992 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlzlj\" (UniqueName: \"kubernetes.io/projected/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-kube-api-access-nlzlj\") pod \"bd1c81c4-e711-4356-a2a5-4647ae66ffb7\" (UID: \"bd1c81c4-e711-4356-a2a5-4647ae66ffb7\") "
Dec 12 16:15:04 crc kubenswrapper[4693]: I1212 16:15:04.810113 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-secret-volume\") pod \"bd1c81c4-e711-4356-a2a5-4647ae66ffb7\" (UID: \"bd1c81c4-e711-4356-a2a5-4647ae66ffb7\") "
Dec 12 16:15:04 crc kubenswrapper[4693]: I1212 16:15:04.810694 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-config-volume\") pod \"bd1c81c4-e711-4356-a2a5-4647ae66ffb7\" (UID: \"bd1c81c4-e711-4356-a2a5-4647ae66ffb7\") "
Dec 12 16:15:04 crc kubenswrapper[4693]: I1212 16:15:04.817795 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bd1c81c4-e711-4356-a2a5-4647ae66ffb7" (UID: "bd1c81c4-e711-4356-a2a5-4647ae66ffb7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:15:04 crc kubenswrapper[4693]: I1212 16:15:04.818694 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-config-volume" (OuterVolumeSpecName: "config-volume") pod "bd1c81c4-e711-4356-a2a5-4647ae66ffb7" (UID: "bd1c81c4-e711-4356-a2a5-4647ae66ffb7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 16:15:04 crc kubenswrapper[4693]: I1212 16:15:04.821596 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-kube-api-access-nlzlj" (OuterVolumeSpecName: "kube-api-access-nlzlj") pod "bd1c81c4-e711-4356-a2a5-4647ae66ffb7" (UID: "bd1c81c4-e711-4356-a2a5-4647ae66ffb7"). InnerVolumeSpecName "kube-api-access-nlzlj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:15:04 crc kubenswrapper[4693]: I1212 16:15:04.825102 4693 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-config-volume\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:04 crc kubenswrapper[4693]: I1212 16:15:04.825148 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlzlj\" (UniqueName: \"kubernetes.io/projected/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-kube-api-access-nlzlj\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:04 crc kubenswrapper[4693]: I1212 16:15:04.825161 4693 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd1c81c4-e711-4356-a2a5-4647ae66ffb7-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:05 crc kubenswrapper[4693]: I1212 16:15:05.072673 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f" event={"ID":"bd1c81c4-e711-4356-a2a5-4647ae66ffb7","Type":"ContainerDied","Data":"e368fd5de5e17fe5fc8d212dcd01cd183b88eb70cf6965a95333f79d14ed7734"}
Dec 12 16:15:05 crc kubenswrapper[4693]: I1212 16:15:05.073004 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e368fd5de5e17fe5fc8d212dcd01cd183b88eb70cf6965a95333f79d14ed7734"
Dec 12 16:15:05 crc kubenswrapper[4693]: I1212 16:15:05.073067 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f"
Dec 12 16:15:05 crc kubenswrapper[4693]: I1212 16:15:05.087545 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c968a82-7344-4fe7-8b88-f24162ff13df","Type":"ContainerStarted","Data":"c44e148c774d7329834f9934016b16c9b2390f3a23f1ddd59c1f2789d9b79eec"}
Dec 12 16:15:05 crc kubenswrapper[4693]: I1212 16:15:05.380329 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b9c8d52-1799-496d-9911-867479d89883" path="/var/lib/kubelet/pods/2b9c8d52-1799-496d-9911-867479d89883/volumes"
Dec 12 16:15:06 crc kubenswrapper[4693]: I1212 16:15:06.104485 4693 generic.go:334] "Generic (PLEG): container finished" podID="47fe2ff8-dc3c-4e98-8502-da788efb57aa" containerID="3881cbaeb54bb969750ad936ba7a54cc0d72415c4fa3823a04ea5c3eb32f205d" exitCode=0
Dec 12 16:15:06 crc kubenswrapper[4693]: I1212 16:15:06.104591 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jm5sn" event={"ID":"47fe2ff8-dc3c-4e98-8502-da788efb57aa","Type":"ContainerDied","Data":"3881cbaeb54bb969750ad936ba7a54cc0d72415c4fa3823a04ea5c3eb32f205d"}
Dec 12 16:15:06 crc kubenswrapper[4693]: I1212 16:15:06.119687 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38afd4c5-3e1b-4a5f-a82c-ac270a369417","Type":"ContainerStarted","Data":"cc3b7ab3806a0bd3be79ce34cf1cb76dfd023a66c40be7b4fa8fffb2f7519fb7"}
Dec 12 16:15:06 crc kubenswrapper[4693]: I1212 16:15:06.120005 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 12 16:15:06 crc kubenswrapper[4693]: I1212 16:15:06.125445 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c968a82-7344-4fe7-8b88-f24162ff13df","Type":"ContainerStarted","Data":"f3317123e4fbaa1691af61d8c4f14d5b79b3f346cd0d7ca0a7139f24df159dbd"}
Dec 12 16:15:06 crc kubenswrapper[4693]: I1212 16:15:06.125501 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c968a82-7344-4fe7-8b88-f24162ff13df","Type":"ContainerStarted","Data":"bef370059b2d9efb625aaa415c6159e803bd0d354eb58d1973a74b9e682ad684"}
Dec 12 16:15:06 crc kubenswrapper[4693]: I1212 16:15:06.132634 4693 generic.go:334] "Generic (PLEG): container finished" podID="0f211a4d-5e81-46e3-a31f-8f359ba8da6f" containerID="6bca49c660e88075753b269ec3c652bb4f3ec69effaa3def39b1bd26971e6ae8" exitCode=0
Dec 12 16:15:06 crc kubenswrapper[4693]: I1212 16:15:06.132731 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mp8v7" event={"ID":"0f211a4d-5e81-46e3-a31f-8f359ba8da6f","Type":"ContainerDied","Data":"6bca49c660e88075753b269ec3c652bb4f3ec69effaa3def39b1bd26971e6ae8"}
Dec 12 16:15:06 crc kubenswrapper[4693]: I1212 16:15:06.157006 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.156975717 podStartE2EDuration="3.156975717s" podCreationTimestamp="2025-12-12 16:15:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:15:06.149114955 +0000 UTC m=+1733.317754636" watchObservedRunningTime="2025-12-12 16:15:06.156975717 +0000 UTC m=+1733.325615328"
Dec 12 16:15:06 crc kubenswrapper[4693]: I1212 16:15:06.179670 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.578646113 podStartE2EDuration="10.17964907s" podCreationTimestamp="2025-12-12 16:14:56 +0000 UTC" firstStartedPulling="2025-12-12 16:14:59.643689067 +0000 UTC m=+1726.812328668" lastFinishedPulling="2025-12-12 16:15:05.244692024 +0000 UTC m=+1732.413331625" observedRunningTime="2025-12-12 16:15:06.170592445 +0000 UTC m=+1733.339232046" watchObservedRunningTime="2025-12-12 16:15:06.17964907 +0000 UTC m=+1733.348288661"
Dec 12 16:15:08 crc kubenswrapper[4693]: I1212 16:15:08.886614 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 12 16:15:08 crc kubenswrapper[4693]: I1212 16:15:08.887223 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 12 16:15:09 crc kubenswrapper[4693]: I1212 16:15:09.365951 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67"
Dec 12 16:15:09 crc kubenswrapper[4693]: E1212 16:15:09.366250 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d"
Dec 12 16:15:09 crc kubenswrapper[4693]: I1212 16:15:09.919626 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mp8v7"
Dec 12 16:15:09 crc kubenswrapper[4693]: I1212 16:15:09.929146 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jm5sn"
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.009729 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-config-data\") pod \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\" (UID: \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\") "
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.009782 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-combined-ca-bundle\") pod \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\" (UID: \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\") "
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.009812 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gck5k\" (UniqueName: \"kubernetes.io/projected/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-kube-api-access-gck5k\") pod \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\" (UID: \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\") "
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.010042 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-scripts\") pod \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\" (UID: \"0f211a4d-5e81-46e3-a31f-8f359ba8da6f\") "
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.010078 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brrkh\" (UniqueName: \"kubernetes.io/projected/47fe2ff8-dc3c-4e98-8502-da788efb57aa-kube-api-access-brrkh\") pod \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\" (UID: \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\") "
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.010131 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-config-data\") pod \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\" (UID: \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\") "
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.010162 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-scripts\") pod \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\" (UID: \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\") "
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.010191 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-combined-ca-bundle\") pod \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\" (UID: \"47fe2ff8-dc3c-4e98-8502-da788efb57aa\") "
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.015258 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-kube-api-access-gck5k" (OuterVolumeSpecName: "kube-api-access-gck5k") pod "0f211a4d-5e81-46e3-a31f-8f359ba8da6f" (UID: "0f211a4d-5e81-46e3-a31f-8f359ba8da6f"). InnerVolumeSpecName "kube-api-access-gck5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.021870 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-scripts" (OuterVolumeSpecName: "scripts") pod "47fe2ff8-dc3c-4e98-8502-da788efb57aa" (UID: "47fe2ff8-dc3c-4e98-8502-da788efb57aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.022485 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47fe2ff8-dc3c-4e98-8502-da788efb57aa-kube-api-access-brrkh" (OuterVolumeSpecName: "kube-api-access-brrkh") pod "47fe2ff8-dc3c-4e98-8502-da788efb57aa" (UID: "47fe2ff8-dc3c-4e98-8502-da788efb57aa"). InnerVolumeSpecName "kube-api-access-brrkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.040193 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-scripts" (OuterVolumeSpecName: "scripts") pod "0f211a4d-5e81-46e3-a31f-8f359ba8da6f" (UID: "0f211a4d-5e81-46e3-a31f-8f359ba8da6f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.069851 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-config-data" (OuterVolumeSpecName: "config-data") pod "47fe2ff8-dc3c-4e98-8502-da788efb57aa" (UID: "47fe2ff8-dc3c-4e98-8502-da788efb57aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.075404 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47fe2ff8-dc3c-4e98-8502-da788efb57aa" (UID: "47fe2ff8-dc3c-4e98-8502-da788efb57aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.087633 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f211a4d-5e81-46e3-a31f-8f359ba8da6f" (UID: "0f211a4d-5e81-46e3-a31f-8f359ba8da6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.108502 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-config-data" (OuterVolumeSpecName: "config-data") pod "0f211a4d-5e81-46e3-a31f-8f359ba8da6f" (UID: "0f211a4d-5e81-46e3-a31f-8f359ba8da6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.113185 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-scripts\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.113222 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brrkh\" (UniqueName: \"kubernetes.io/projected/47fe2ff8-dc3c-4e98-8502-da788efb57aa-kube-api-access-brrkh\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.113234 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-config-data\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.113243 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-scripts\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.113254 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47fe2ff8-dc3c-4e98-8502-da788efb57aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.113288 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-config-data\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.113301 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.113312 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gck5k\" (UniqueName: \"kubernetes.io/projected/0f211a4d-5e81-46e3-a31f-8f359ba8da6f-kube-api-access-gck5k\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.190055 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jm5sn"
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.190340 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jm5sn" event={"ID":"47fe2ff8-dc3c-4e98-8502-da788efb57aa","Type":"ContainerDied","Data":"858ab4874a9577644ff1aeb321006ceaeb543acc559b297a19ad33398e2fed09"}
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.190396 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="858ab4874a9577644ff1aeb321006ceaeb543acc559b297a19ad33398e2fed09"
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.192471 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mp8v7" event={"ID":"0f211a4d-5e81-46e3-a31f-8f359ba8da6f","Type":"ContainerDied","Data":"77a46cf73747b12af6dd88a3a8cec4e7d287da738ceb737e9863dafef334d10f"}
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.192500 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77a46cf73747b12af6dd88a3a8cec4e7d287da738ceb737e9863dafef334d10f"
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.192561 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mp8v7"
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.195184 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-s9j89" event={"ID":"50ace206-3061-4478-910f-dcbbf46d5f72","Type":"ContainerStarted","Data":"a1a13a252b004359b32de5d2196620bc01615530155c96ef54747c3eb3aeba67"}
Dec 12 16:15:10 crc kubenswrapper[4693]: I1212 16:15:10.238517 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-s9j89" podStartSLOduration=1.897285835 podStartE2EDuration="9.238494158s" podCreationTimestamp="2025-12-12 16:15:01 +0000 UTC" firstStartedPulling="2025-12-12 16:15:02.329476794 +0000 UTC m=+1729.498116395" lastFinishedPulling="2025-12-12 16:15:09.670685117 +0000 UTC m=+1736.839324718" observedRunningTime="2025-12-12 16:15:10.220907662 +0000 UTC m=+1737.389547263" watchObservedRunningTime="2025-12-12 16:15:10.238494158 +0000 UTC m=+1737.407133759"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.048758 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 12 16:15:11 crc kubenswrapper[4693]: E1212 16:15:11.049677 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f211a4d-5e81-46e3-a31f-8f359ba8da6f" containerName="nova-cell1-conductor-db-sync"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.049695 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f211a4d-5e81-46e3-a31f-8f359ba8da6f" containerName="nova-cell1-conductor-db-sync"
Dec 12 16:15:11 crc kubenswrapper[4693]: E1212 16:15:11.049742 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1c81c4-e711-4356-a2a5-4647ae66ffb7" containerName="collect-profiles"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.049748 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1c81c4-e711-4356-a2a5-4647ae66ffb7" containerName="collect-profiles"
Dec 12 16:15:11 crc kubenswrapper[4693]: E1212 16:15:11.049768 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47fe2ff8-dc3c-4e98-8502-da788efb57aa" containerName="nova-manage"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.049775 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="47fe2ff8-dc3c-4e98-8502-da788efb57aa" containerName="nova-manage"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.049985 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f211a4d-5e81-46e3-a31f-8f359ba8da6f" containerName="nova-cell1-conductor-db-sync"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.050000 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd1c81c4-e711-4356-a2a5-4647ae66ffb7" containerName="collect-profiles"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.050021 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="47fe2ff8-dc3c-4e98-8502-da788efb57aa" containerName="nova-manage"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.050854 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.053816 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.080418 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.108005 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.109095 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.122201 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.135668 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.147607 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3e1b2e-09c5-4643-bc6e-6e1218117a13-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ce3e1b2e-09c5-4643-bc6e-6e1218117a13\") " pod="openstack/nova-cell1-conductor-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.147832 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-224q8\" (UniqueName: \"kubernetes.io/projected/ce3e1b2e-09c5-4643-bc6e-6e1218117a13-kube-api-access-224q8\") pod \"nova-cell1-conductor-0\" (UID: \"ce3e1b2e-09c5-4643-bc6e-6e1218117a13\") " pod="openstack/nova-cell1-conductor-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.147997 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce3e1b2e-09c5-4643-bc6e-6e1218117a13-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ce3e1b2e-09c5-4643-bc6e-6e1218117a13\") " pod="openstack/nova-cell1-conductor-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.241581 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.244660 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.253006 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.255681 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3e1b2e-09c5-4643-bc6e-6e1218117a13-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ce3e1b2e-09c5-4643-bc6e-6e1218117a13\") " pod="openstack/nova-cell1-conductor-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.255859 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-224q8\" (UniqueName: \"kubernetes.io/projected/ce3e1b2e-09c5-4643-bc6e-6e1218117a13-kube-api-access-224q8\") pod \"nova-cell1-conductor-0\" (UID: \"ce3e1b2e-09c5-4643-bc6e-6e1218117a13\") " pod="openstack/nova-cell1-conductor-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.256072 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce3e1b2e-09c5-4643-bc6e-6e1218117a13-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ce3e1b2e-09c5-4643-bc6e-6e1218117a13\") " pod="openstack/nova-cell1-conductor-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.261285 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce3e1b2e-09c5-4643-bc6e-6e1218117a13-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ce3e1b2e-09c5-4643-bc6e-6e1218117a13\") " pod="openstack/nova-cell1-conductor-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.264555 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3e1b2e-09c5-4643-bc6e-6e1218117a13-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ce3e1b2e-09c5-4643-bc6e-6e1218117a13\") " pod="openstack/nova-cell1-conductor-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.274785 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.275041 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7179e70d-0681-41fa-ba91-298bd275b282" containerName="nova-scheduler-scheduler" containerID="cri-o://f3551060ac6700170c940ec8c451145d3d7c0cae099745e1a33cf8ab94466485" gracePeriod=30
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.280792 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-224q8\" (UniqueName: \"kubernetes.io/projected/ce3e1b2e-09c5-4643-bc6e-6e1218117a13-kube-api-access-224q8\") pod \"nova-cell1-conductor-0\" (UID: \"ce3e1b2e-09c5-4643-bc6e-6e1218117a13\") " pod="openstack/nova-cell1-conductor-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: E1212 16:15:11.286141 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3551060ac6700170c940ec8c451145d3d7c0cae099745e1a33cf8ab94466485" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 12 16:15:11 crc kubenswrapper[4693]: E1212 16:15:11.290409 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3551060ac6700170c940ec8c451145d3d7c0cae099745e1a33cf8ab94466485" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 12 16:15:11 crc kubenswrapper[4693]: E1212 16:15:11.294153 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3551060ac6700170c940ec8c451145d3d7c0cae099745e1a33cf8ab94466485" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 12 16:15:11 crc kubenswrapper[4693]: E1212 16:15:11.294206 4693 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7179e70d-0681-41fa-ba91-298bd275b282" containerName="nova-scheduler-scheduler"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.323563 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.324134 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0c968a82-7344-4fe7-8b88-f24162ff13df" containerName="nova-metadata-log" containerID="cri-o://f3317123e4fbaa1691af61d8c4f14d5b79b3f346cd0d7ca0a7139f24df159dbd" gracePeriod=30
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.324843 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0c968a82-7344-4fe7-8b88-f24162ff13df" containerName="nova-metadata-metadata" containerID="cri-o://bef370059b2d9efb625aaa415c6159e803bd0d354eb58d1973a74b9e682ad684" gracePeriod=30
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.385043 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.500925 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"]
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.503776 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.513862 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"]
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.587853 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-config\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.588183 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.588293 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.588394 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.588470 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.588588 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2snd\" (UniqueName: \"kubernetes.io/projected/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-kube-api-access-f2snd\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.694723 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-config\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.694786 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.694807 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.694838 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.694856 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.694916 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2snd\" (UniqueName: \"kubernetes.io/projected/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-kube-api-access-f2snd\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.696149 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-config\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.697125 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.697668 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.698181 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.698679 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.714119 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2snd\" (UniqueName: \"kubernetes.io/projected/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-kube-api-access-f2snd\") pod \"dnsmasq-dns-6b7bbf7cf9-lmn86\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:11 crc kubenswrapper[4693]: I1212 16:15:11.722414 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.179238 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.269486 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ce3e1b2e-09c5-4643-bc6e-6e1218117a13","Type":"ContainerStarted","Data":"a4702c0f5ff4a6ebcc53580c050709f23eca96532d8ff60f4bf9e5f08a09462a"}
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.272024 4693 generic.go:334] "Generic (PLEG): container finished" podID="0c968a82-7344-4fe7-8b88-f24162ff13df" containerID="bef370059b2d9efb625aaa415c6159e803bd0d354eb58d1973a74b9e682ad684" exitCode=0
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.272049 4693 generic.go:334] "Generic (PLEG): container finished" podID="0c968a82-7344-4fe7-8b88-f24162ff13df" containerID="f3317123e4fbaa1691af61d8c4f14d5b79b3f346cd0d7ca0a7139f24df159dbd" exitCode=143
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.272148 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c968a82-7344-4fe7-8b88-f24162ff13df","Type":"ContainerDied","Data":"bef370059b2d9efb625aaa415c6159e803bd0d354eb58d1973a74b9e682ad684"}
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.272204 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c968a82-7344-4fe7-8b88-f24162ff13df","Type":"ContainerDied","Data":"f3317123e4fbaa1691af61d8c4f14d5b79b3f346cd0d7ca0a7139f24df159dbd"}
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.335125 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.395524 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"]
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.417738 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvlnq\" (UniqueName: \"kubernetes.io/projected/0c968a82-7344-4fe7-8b88-f24162ff13df-kube-api-access-wvlnq\") pod \"0c968a82-7344-4fe7-8b88-f24162ff13df\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") "
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.418046 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-nova-metadata-tls-certs\") pod \"0c968a82-7344-4fe7-8b88-f24162ff13df\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") "
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.418094 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-combined-ca-bundle\") pod \"0c968a82-7344-4fe7-8b88-f24162ff13df\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") "
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.418250 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-config-data\") pod \"0c968a82-7344-4fe7-8b88-f24162ff13df\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") "
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.418305 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c968a82-7344-4fe7-8b88-f24162ff13df-logs\") pod \"0c968a82-7344-4fe7-8b88-f24162ff13df\" (UID: \"0c968a82-7344-4fe7-8b88-f24162ff13df\") "
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.421557 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c968a82-7344-4fe7-8b88-f24162ff13df-logs" (OuterVolumeSpecName: "logs") pod "0c968a82-7344-4fe7-8b88-f24162ff13df" (UID: "0c968a82-7344-4fe7-8b88-f24162ff13df"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.426401 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c968a82-7344-4fe7-8b88-f24162ff13df-kube-api-access-wvlnq" (OuterVolumeSpecName: "kube-api-access-wvlnq") pod "0c968a82-7344-4fe7-8b88-f24162ff13df" (UID: "0c968a82-7344-4fe7-8b88-f24162ff13df"). InnerVolumeSpecName "kube-api-access-wvlnq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.480406 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-config-data" (OuterVolumeSpecName: "config-data") pod "0c968a82-7344-4fe7-8b88-f24162ff13df" (UID: "0c968a82-7344-4fe7-8b88-f24162ff13df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.483517 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c968a82-7344-4fe7-8b88-f24162ff13df" (UID: "0c968a82-7344-4fe7-8b88-f24162ff13df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.496831 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0c968a82-7344-4fe7-8b88-f24162ff13df" (UID: "0c968a82-7344-4fe7-8b88-f24162ff13df"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.521779 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvlnq\" (UniqueName: \"kubernetes.io/projected/0c968a82-7344-4fe7-8b88-f24162ff13df-kube-api-access-wvlnq\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.521809 4693 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.521818 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.521829 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c968a82-7344-4fe7-8b88-f24162ff13df-config-data\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:12 crc kubenswrapper[4693]: I1212 16:15:12.521838 4693 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c968a82-7344-4fe7-8b88-f24162ff13df-logs\") on node \"crc\" DevicePath \"\""
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.299809 4693 generic.go:334] "Generic (PLEG): container finished" podID="50ace206-3061-4478-910f-dcbbf46d5f72" containerID="a1a13a252b004359b32de5d2196620bc01615530155c96ef54747c3eb3aeba67" exitCode=0
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.300146 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-s9j89" event={"ID":"50ace206-3061-4478-910f-dcbbf46d5f72","Type":"ContainerDied","Data":"a1a13a252b004359b32de5d2196620bc01615530155c96ef54747c3eb3aeba67"}
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.304031 4693 generic.go:334] "Generic (PLEG): container finished" podID="199159e4-5fda-4c35-a5f3-c1d84e68b9bc" containerID="4ccc35eb2d2f3108ebb8c2d8ec79dc50b82572e26b0735c6a2f79add86bed9c3" exitCode=0
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.304102 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86" event={"ID":"199159e4-5fda-4c35-a5f3-c1d84e68b9bc","Type":"ContainerDied","Data":"4ccc35eb2d2f3108ebb8c2d8ec79dc50b82572e26b0735c6a2f79add86bed9c3"}
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.304132 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86" event={"ID":"199159e4-5fda-4c35-a5f3-c1d84e68b9bc","Type":"ContainerStarted","Data":"dc29deb68a92ec81d825f41d58a9e09d699e7da42517b59a6aa1e84656092f95"}
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.310541 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c968a82-7344-4fe7-8b88-f24162ff13df","Type":"ContainerDied","Data":"c44e148c774d7329834f9934016b16c9b2390f3a23f1ddd59c1f2789d9b79eec"}
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.310570 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.310601 4693 scope.go:117] "RemoveContainer" containerID="bef370059b2d9efb625aaa415c6159e803bd0d354eb58d1973a74b9e682ad684"
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.325183 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff9d8d2e-ddae-487e-b259-334e7df154d4" containerName="nova-api-log" containerID="cri-o://bf7ab1f648271fc982958c5c19e93dd9bcf793297545cd90f355a24cb06ecf9b" gracePeriod=30
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.326770 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ce3e1b2e-09c5-4643-bc6e-6e1218117a13","Type":"ContainerStarted","Data":"b8bb212c10b4b89c22889ced28e213da557995a37e9e72817ac109706271fb4d"}
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.326956 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff9d8d2e-ddae-487e-b259-334e7df154d4" containerName="nova-api-api" containerID="cri-o://14b8237db29e568147b3955d6b33057a26fd11dee58e9c24a8a6765d48211746" gracePeriod=30
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.332859 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.418136 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.418118516 podStartE2EDuration="2.418118516s" podCreationTimestamp="2025-12-12 16:15:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:15:13.39497489 +0000 UTC m=+1740.563614491" watchObservedRunningTime="2025-12-12 16:15:13.418118516 +0000 UTC m=+1740.586758117"
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.556671 4693 scope.go:117] "RemoveContainer" containerID="f3317123e4fbaa1691af61d8c4f14d5b79b3f346cd0d7ca0a7139f24df159dbd"
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.639759 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.685549 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.726437 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 12 16:15:13 crc kubenswrapper[4693]: E1212 16:15:13.727083 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c968a82-7344-4fe7-8b88-f24162ff13df" containerName="nova-metadata-log"
Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.727102 4693 state_mem.go:107] "Deleted CPUSet assignment"
podUID="0c968a82-7344-4fe7-8b88-f24162ff13df" containerName="nova-metadata-log" Dec 12 16:15:13 crc kubenswrapper[4693]: E1212 16:15:13.727127 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c968a82-7344-4fe7-8b88-f24162ff13df" containerName="nova-metadata-metadata" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.727134 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c968a82-7344-4fe7-8b88-f24162ff13df" containerName="nova-metadata-metadata" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.727392 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c968a82-7344-4fe7-8b88-f24162ff13df" containerName="nova-metadata-metadata" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.727411 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c968a82-7344-4fe7-8b88-f24162ff13df" containerName="nova-metadata-log" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.728689 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.737322 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.737408 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.743122 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.814752 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-config-data\") pod \"nova-metadata-0\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " pod="openstack/nova-metadata-0" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.814808 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " pod="openstack/nova-metadata-0" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.814854 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " pod="openstack/nova-metadata-0" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.814957 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px457\" (UniqueName: \"kubernetes.io/projected/459816af-8a53-489f-9a69-fe427a9e2ef3-kube-api-access-px457\") pod \"nova-metadata-0\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " pod="openstack/nova-metadata-0" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.815050 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/459816af-8a53-489f-9a69-fe427a9e2ef3-logs\") pod \"nova-metadata-0\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " pod="openstack/nova-metadata-0" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.917285 
4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px457\" (UniqueName: \"kubernetes.io/projected/459816af-8a53-489f-9a69-fe427a9e2ef3-kube-api-access-px457\") pod \"nova-metadata-0\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " pod="openstack/nova-metadata-0" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.917382 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/459816af-8a53-489f-9a69-fe427a9e2ef3-logs\") pod \"nova-metadata-0\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " pod="openstack/nova-metadata-0" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.917482 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-config-data\") pod \"nova-metadata-0\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " pod="openstack/nova-metadata-0" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.917504 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " pod="openstack/nova-metadata-0" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.917530 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " pod="openstack/nova-metadata-0" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.917907 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/459816af-8a53-489f-9a69-fe427a9e2ef3-logs\") pod \"nova-metadata-0\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " pod="openstack/nova-metadata-0" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.923232 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-config-data\") pod \"nova-metadata-0\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " pod="openstack/nova-metadata-0" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.924674 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " pod="openstack/nova-metadata-0" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.935196 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " pod="openstack/nova-metadata-0" Dec 12 16:15:13 crc kubenswrapper[4693]: I1212 16:15:13.938483 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px457\" (UniqueName: \"kubernetes.io/projected/459816af-8a53-489f-9a69-fe427a9e2ef3-kube-api-access-px457\") pod \"nova-metadata-0\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " pod="openstack/nova-metadata-0" Dec 12 16:15:14 
crc kubenswrapper[4693]: I1212 16:15:14.051870 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.060127 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.120825 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h8d6\" (UniqueName: \"kubernetes.io/projected/7179e70d-0681-41fa-ba91-298bd275b282-kube-api-access-7h8d6\") pod \"7179e70d-0681-41fa-ba91-298bd275b282\" (UID: \"7179e70d-0681-41fa-ba91-298bd275b282\") " Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.121065 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7179e70d-0681-41fa-ba91-298bd275b282-config-data\") pod \"7179e70d-0681-41fa-ba91-298bd275b282\" (UID: \"7179e70d-0681-41fa-ba91-298bd275b282\") " Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.121246 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7179e70d-0681-41fa-ba91-298bd275b282-combined-ca-bundle\") pod \"7179e70d-0681-41fa-ba91-298bd275b282\" (UID: \"7179e70d-0681-41fa-ba91-298bd275b282\") " Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.128142 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7179e70d-0681-41fa-ba91-298bd275b282-kube-api-access-7h8d6" (OuterVolumeSpecName: "kube-api-access-7h8d6") pod "7179e70d-0681-41fa-ba91-298bd275b282" (UID: "7179e70d-0681-41fa-ba91-298bd275b282"). InnerVolumeSpecName "kube-api-access-7h8d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.167674 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7179e70d-0681-41fa-ba91-298bd275b282-config-data" (OuterVolumeSpecName: "config-data") pod "7179e70d-0681-41fa-ba91-298bd275b282" (UID: "7179e70d-0681-41fa-ba91-298bd275b282"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.178369 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7179e70d-0681-41fa-ba91-298bd275b282-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7179e70d-0681-41fa-ba91-298bd275b282" (UID: "7179e70d-0681-41fa-ba91-298bd275b282"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.224808 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h8d6\" (UniqueName: \"kubernetes.io/projected/7179e70d-0681-41fa-ba91-298bd275b282-kube-api-access-7h8d6\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.225097 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7179e70d-0681-41fa-ba91-298bd275b282-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.225114 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7179e70d-0681-41fa-ba91-298bd275b282-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.348104 4693 generic.go:334] "Generic (PLEG): container finished" podID="7179e70d-0681-41fa-ba91-298bd275b282" containerID="f3551060ac6700170c940ec8c451145d3d7c0cae099745e1a33cf8ab94466485" exitCode=0 Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.348306 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.349310 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7179e70d-0681-41fa-ba91-298bd275b282","Type":"ContainerDied","Data":"f3551060ac6700170c940ec8c451145d3d7c0cae099745e1a33cf8ab94466485"} Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.349365 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7179e70d-0681-41fa-ba91-298bd275b282","Type":"ContainerDied","Data":"ac2cd5ee41459461f4be67acc8800f2885f1d0d49ffc626a0fe8f368fbbe3f92"} Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.349384 4693 scope.go:117] "RemoveContainer" containerID="f3551060ac6700170c940ec8c451145d3d7c0cae099745e1a33cf8ab94466485" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.362440 4693 generic.go:334] "Generic (PLEG): container finished" podID="ff9d8d2e-ddae-487e-b259-334e7df154d4" containerID="bf7ab1f648271fc982958c5c19e93dd9bcf793297545cd90f355a24cb06ecf9b" exitCode=143 Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.362507 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff9d8d2e-ddae-487e-b259-334e7df154d4","Type":"ContainerDied","Data":"bf7ab1f648271fc982958c5c19e93dd9bcf793297545cd90f355a24cb06ecf9b"} Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.378908 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86" event={"ID":"199159e4-5fda-4c35-a5f3-c1d84e68b9bc","Type":"ContainerStarted","Data":"593e4afc2d92a69577e3dbda2c5d309968db70a1d7ea9f7a8fc0fef7793616ba"} Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.379954 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.394666 4693 scope.go:117] "RemoveContainer" containerID="f3551060ac6700170c940ec8c451145d3d7c0cae099745e1a33cf8ab94466485" Dec 12 16:15:14 crc kubenswrapper[4693]: E1212 16:15:14.396613 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f3551060ac6700170c940ec8c451145d3d7c0cae099745e1a33cf8ab94466485\": container with ID starting with f3551060ac6700170c940ec8c451145d3d7c0cae099745e1a33cf8ab94466485 not found: ID does not exist" containerID="f3551060ac6700170c940ec8c451145d3d7c0cae099745e1a33cf8ab94466485" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.396674 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3551060ac6700170c940ec8c451145d3d7c0cae099745e1a33cf8ab94466485"} err="failed to get container status \"f3551060ac6700170c940ec8c451145d3d7c0cae099745e1a33cf8ab94466485\": rpc error: code = NotFound desc = could not find container \"f3551060ac6700170c940ec8c451145d3d7c0cae099745e1a33cf8ab94466485\": container with ID starting with f3551060ac6700170c940ec8c451145d3d7c0cae099745e1a33cf8ab94466485 not found: ID does not exist" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.439314 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.463927 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.483794 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 16:15:14 crc kubenswrapper[4693]: E1212 16:15:14.484408 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7179e70d-0681-41fa-ba91-298bd275b282" containerName="nova-scheduler-scheduler" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.484424 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="7179e70d-0681-41fa-ba91-298bd275b282" containerName="nova-scheduler-scheduler" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.484753 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="7179e70d-0681-41fa-ba91-298bd275b282" containerName="nova-scheduler-scheduler" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.485602 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.493346 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.533031 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69wd4\" (UniqueName: \"kubernetes.io/projected/528ba800-12cf-4149-82df-559b4ea15ad7-kube-api-access-69wd4\") pod \"nova-scheduler-0\" (UID: \"528ba800-12cf-4149-82df-559b4ea15ad7\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.533135 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/528ba800-12cf-4149-82df-559b4ea15ad7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"528ba800-12cf-4149-82df-559b4ea15ad7\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.533201 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/528ba800-12cf-4149-82df-559b4ea15ad7-config-data\") pod \"nova-scheduler-0\" (UID: \"528ba800-12cf-4149-82df-559b4ea15ad7\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.548587 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86" podStartSLOduration=3.548566407 podStartE2EDuration="3.548566407s" podCreationTimestamp="2025-12-12 16:15:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:15:14.401889611 +0000 UTC m=+1741.570529212" watchObservedRunningTime="2025-12-12 16:15:14.548566407 +0000 UTC m=+1741.717206008" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.609337 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.635859 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/528ba800-12cf-4149-82df-559b4ea15ad7-config-data\") pod \"nova-scheduler-0\" (UID: \"528ba800-12cf-4149-82df-559b4ea15ad7\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.636037 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69wd4\" (UniqueName: \"kubernetes.io/projected/528ba800-12cf-4149-82df-559b4ea15ad7-kube-api-access-69wd4\") pod \"nova-scheduler-0\" (UID: \"528ba800-12cf-4149-82df-559b4ea15ad7\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.636125 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/528ba800-12cf-4149-82df-559b4ea15ad7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"528ba800-12cf-4149-82df-559b4ea15ad7\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.649504 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/528ba800-12cf-4149-82df-559b4ea15ad7-config-data\") pod \"nova-scheduler-0\" (UID: \"528ba800-12cf-4149-82df-559b4ea15ad7\") " 
pod="openstack/nova-scheduler-0" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.652087 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/528ba800-12cf-4149-82df-559b4ea15ad7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"528ba800-12cf-4149-82df-559b4ea15ad7\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.701306 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69wd4\" (UniqueName: \"kubernetes.io/projected/528ba800-12cf-4149-82df-559b4ea15ad7-kube-api-access-69wd4\") pod \"nova-scheduler-0\" (UID: \"528ba800-12cf-4149-82df-559b4ea15ad7\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.701797 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 16:15:14 crc kubenswrapper[4693]: I1212 16:15:14.885125 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.188657 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-s9j89" Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.257337 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-config-data\") pod \"50ace206-3061-4478-910f-dcbbf46d5f72\" (UID: \"50ace206-3061-4478-910f-dcbbf46d5f72\") " Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.257547 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-combined-ca-bundle\") pod \"50ace206-3061-4478-910f-dcbbf46d5f72\" (UID: \"50ace206-3061-4478-910f-dcbbf46d5f72\") " Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.257673 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-scripts\") pod \"50ace206-3061-4478-910f-dcbbf46d5f72\" (UID: \"50ace206-3061-4478-910f-dcbbf46d5f72\") " Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.257707 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b4xj\" (UniqueName: \"kubernetes.io/projected/50ace206-3061-4478-910f-dcbbf46d5f72-kube-api-access-6b4xj\") pod \"50ace206-3061-4478-910f-dcbbf46d5f72\" (UID: \"50ace206-3061-4478-910f-dcbbf46d5f72\") " Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.263480 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ace206-3061-4478-910f-dcbbf46d5f72-kube-api-access-6b4xj" (OuterVolumeSpecName: "kube-api-access-6b4xj") pod "50ace206-3061-4478-910f-dcbbf46d5f72" (UID: "50ace206-3061-4478-910f-dcbbf46d5f72"). InnerVolumeSpecName "kube-api-access-6b4xj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.264081 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-scripts" (OuterVolumeSpecName: "scripts") pod "50ace206-3061-4478-910f-dcbbf46d5f72" (UID: "50ace206-3061-4478-910f-dcbbf46d5f72"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.320423 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50ace206-3061-4478-910f-dcbbf46d5f72" (UID: "50ace206-3061-4478-910f-dcbbf46d5f72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.325556 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-config-data" (OuterVolumeSpecName: "config-data") pod "50ace206-3061-4478-910f-dcbbf46d5f72" (UID: "50ace206-3061-4478-910f-dcbbf46d5f72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.370248 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.370298 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.370311 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50ace206-3061-4478-910f-dcbbf46d5f72-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.370320 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b4xj\" (UniqueName: \"kubernetes.io/projected/50ace206-3061-4478-910f-dcbbf46d5f72-kube-api-access-6b4xj\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.411991 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-s9j89" Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.416607 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c968a82-7344-4fe7-8b88-f24162ff13df" path="/var/lib/kubelet/pods/0c968a82-7344-4fe7-8b88-f24162ff13df/volumes" Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.423148 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7179e70d-0681-41fa-ba91-298bd275b282" path="/var/lib/kubelet/pods/7179e70d-0681-41fa-ba91-298bd275b282/volumes" Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.425191 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-s9j89" event={"ID":"50ace206-3061-4478-910f-dcbbf46d5f72","Type":"ContainerDied","Data":"c254931a51ec3ecc56961163325db68889ddb26819b204a55ce47710f555cc45"} Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.425222 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c254931a51ec3ecc56961163325db68889ddb26819b204a55ce47710f555cc45" Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.426015 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"459816af-8a53-489f-9a69-fe427a9e2ef3","Type":"ContainerStarted","Data":"08f8ce06ec9be2d2f8155075eab22a425745403a4ef1938207d0677e7eab0302"} Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.426047 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"459816af-8a53-489f-9a69-fe427a9e2ef3","Type":"ContainerStarted","Data":"62595d8701f3d2693290725f1154aa8c74c8031944e5c7b00fb9a2c1202a63f6"} Dec 12 16:15:15 crc kubenswrapper[4693]: W1212 16:15:15.617640 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod528ba800_12cf_4149_82df_559b4ea15ad7.slice/crio-6e6e7f7678b92379c28be0cc67472a09f4af494486894f6f6e7d083af40568b5 WatchSource:0}: Error finding container 6e6e7f7678b92379c28be0cc67472a09f4af494486894f6f6e7d083af40568b5: Status 404 returned error can't find the container with id 6e6e7f7678b92379c28be0cc67472a09f4af494486894f6f6e7d083af40568b5 Dec 12 16:15:15 crc kubenswrapper[4693]: I1212 16:15:15.619851 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.206067 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.206740 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerName="ceilometer-central-agent" containerID="cri-o://84c0935b6357be2256a4d54f7e475cadf80d8b3d5c98477ed87493e12be0c57c" gracePeriod=30 Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.206786 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerName="proxy-httpd" containerID="cri-o://cc3b7ab3806a0bd3be79ce34cf1cb76dfd023a66c40be7b4fa8fffb2f7519fb7" gracePeriod=30 Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.206827 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerName="sg-core" containerID="cri-o://c10be96a768cad36cf79bc275c59b1767b7236beb6632bce964d2dc57a12150a" 
gracePeriod=30 Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.206872 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerName="ceilometer-notification-agent" containerID="cri-o://c57fc982e7da26280b4da3a615702607dad9ceb5264526dd38d48c56ca0a9533" gracePeriod=30 Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.310967 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.240:3000/\": read tcp 10.217.0.2:43508->10.217.0.240:3000: read: connection reset by peer" Dec 12 16:15:16 crc kubenswrapper[4693]: E1212 16:15:16.420733 4693 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38afd4c5_3e1b_4a5f_a82c_ac270a369417.slice/crio-conmon-c10be96a768cad36cf79bc275c59b1767b7236beb6632bce964d2dc57a12150a.scope\": RecentStats: unable to find data in memory cache]" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.505534 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"528ba800-12cf-4149-82df-559b4ea15ad7","Type":"ContainerStarted","Data":"ea445a038ae2162ff1978140c8151fdc0e5fbc131c7609ae8bc1a452b2140276"} Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.505580 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"528ba800-12cf-4149-82df-559b4ea15ad7","Type":"ContainerStarted","Data":"6e6e7f7678b92379c28be0cc67472a09f4af494486894f6f6e7d083af40568b5"} Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.526994 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Dec 12 16:15:16 crc kubenswrapper[4693]: E1212 16:15:16.527638 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ace206-3061-4478-910f-dcbbf46d5f72" containerName="aodh-db-sync" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.527656 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ace206-3061-4478-910f-dcbbf46d5f72" containerName="aodh-db-sync" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.527883 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ace206-3061-4478-910f-dcbbf46d5f72" containerName="aodh-db-sync" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.530131 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.535864 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jzr72" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.536055 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.536148 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.548798 4693 generic.go:334] "Generic (PLEG): container finished" podID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerID="cc3b7ab3806a0bd3be79ce34cf1cb76dfd023a66c40be7b4fa8fffb2f7519fb7" exitCode=0 Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.548830 4693 generic.go:334] "Generic (PLEG): container finished" podID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerID="c10be96a768cad36cf79bc275c59b1767b7236beb6632bce964d2dc57a12150a" exitCode=2 Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.548874 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38afd4c5-3e1b-4a5f-a82c-ac270a369417","Type":"ContainerDied","Data":"cc3b7ab3806a0bd3be79ce34cf1cb76dfd023a66c40be7b4fa8fffb2f7519fb7"} Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.548900 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38afd4c5-3e1b-4a5f-a82c-ac270a369417","Type":"ContainerDied","Data":"c10be96a768cad36cf79bc275c59b1767b7236beb6632bce964d2dc57a12150a"} Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.552882 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"459816af-8a53-489f-9a69-fe427a9e2ef3","Type":"ContainerStarted","Data":"5f7d99afc43c6b9ae94d921b4a43b50f294b39f6b865c8e6e215eddb48eefa09"} Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.582485 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.617212 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.61719072 podStartE2EDuration="2.61719072s" podCreationTimestamp="2025-12-12 16:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:15:16.523773334 +0000 UTC m=+1743.692412935" watchObservedRunningTime="2025-12-12 16:15:16.61719072 +0000 UTC m=+1743.785830321" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.618878 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-scripts\") pod \"aodh-0\" (UID: \"db313a41-7be9-4215-846e-d480bbdb0186\") " pod="openstack/aodh-0" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.618937 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv5g2\" (UniqueName: \"kubernetes.io/projected/db313a41-7be9-4215-846e-d480bbdb0186-kube-api-access-gv5g2\") pod \"aodh-0\" (UID: \"db313a41-7be9-4215-846e-d480bbdb0186\") " pod="openstack/aodh-0" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.619073 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-combined-ca-bundle\") pod \"aodh-0\" (UID: \"db313a41-7be9-4215-846e-d480bbdb0186\") " pod="openstack/aodh-0" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.619135 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-config-data\") pod \"aodh-0\" (UID: \"db313a41-7be9-4215-846e-d480bbdb0186\") " pod="openstack/aodh-0" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.649767 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.64974606 podStartE2EDuration="3.64974606s" podCreationTimestamp="2025-12-12 16:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:15:16.587644411 +0000 UTC m=+1743.756284022" watchObservedRunningTime="2025-12-12 16:15:16.64974606 +0000 UTC m=+1743.818385651" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.722095 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-scripts\") pod \"aodh-0\" (UID: \"db313a41-7be9-4215-846e-d480bbdb0186\") " pod="openstack/aodh-0" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.722144 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv5g2\" (UniqueName: \"kubernetes.io/projected/db313a41-7be9-4215-846e-d480bbdb0186-kube-api-access-gv5g2\") pod \"aodh-0\" (UID: \"db313a41-7be9-4215-846e-d480bbdb0186\") " pod="openstack/aodh-0" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.722189 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-combined-ca-bundle\") pod \"aodh-0\" (UID: \"db313a41-7be9-4215-846e-d480bbdb0186\") " pod="openstack/aodh-0" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.722209 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-config-data\") pod \"aodh-0\" (UID: \"db313a41-7be9-4215-846e-d480bbdb0186\") " pod="openstack/aodh-0" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.729063 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-combined-ca-bundle\") pod \"aodh-0\" (UID: \"db313a41-7be9-4215-846e-d480bbdb0186\") " pod="openstack/aodh-0" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.732100 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-config-data\") pod \"aodh-0\" (UID: \"db313a41-7be9-4215-846e-d480bbdb0186\") " pod="openstack/aodh-0" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.739722 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-scripts\") pod \"aodh-0\" (UID: \"db313a41-7be9-4215-846e-d480bbdb0186\") " pod="openstack/aodh-0" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.761887 4693 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv5g2\" (UniqueName: \"kubernetes.io/projected/db313a41-7be9-4215-846e-d480bbdb0186-kube-api-access-gv5g2\") pod \"aodh-0\" (UID: \"db313a41-7be9-4215-846e-d480bbdb0186\") " pod="openstack/aodh-0" Dec 12 16:15:16 crc kubenswrapper[4693]: I1212 16:15:16.862691 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Dec 12 16:15:17 crc kubenswrapper[4693]: I1212 16:15:17.517933 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 12 16:15:17 crc kubenswrapper[4693]: I1212 16:15:17.540060 4693 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 16:15:17 crc kubenswrapper[4693]: I1212 16:15:17.588182 4693 generic.go:334] "Generic (PLEG): container finished" podID="ff9d8d2e-ddae-487e-b259-334e7df154d4" containerID="14b8237db29e568147b3955d6b33057a26fd11dee58e9c24a8a6765d48211746" exitCode=0 Dec 12 16:15:17 crc kubenswrapper[4693]: I1212 16:15:17.588564 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff9d8d2e-ddae-487e-b259-334e7df154d4","Type":"ContainerDied","Data":"14b8237db29e568147b3955d6b33057a26fd11dee58e9c24a8a6765d48211746"} Dec 12 16:15:17 crc kubenswrapper[4693]: I1212 16:15:17.595148 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db313a41-7be9-4215-846e-d480bbdb0186","Type":"ContainerStarted","Data":"769fe7cf556a09922857e02a0a409bb3d902a5a60d2e27ac6b26a9dfafb044a7"} Dec 12 16:15:17 crc kubenswrapper[4693]: I1212 16:15:17.598561 4693 generic.go:334] "Generic (PLEG): container finished" podID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerID="c57fc982e7da26280b4da3a615702607dad9ceb5264526dd38d48c56ca0a9533" exitCode=0 Dec 12 16:15:17 crc kubenswrapper[4693]: I1212 16:15:17.598589 4693 generic.go:334] "Generic (PLEG): container finished" podID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerID="84c0935b6357be2256a4d54f7e475cadf80d8b3d5c98477ed87493e12be0c57c" exitCode=0 Dec 12 16:15:17 crc kubenswrapper[4693]: I1212 16:15:17.598615 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38afd4c5-3e1b-4a5f-a82c-ac270a369417","Type":"ContainerDied","Data":"c57fc982e7da26280b4da3a615702607dad9ceb5264526dd38d48c56ca0a9533"} Dec 12 16:15:17 crc kubenswrapper[4693]: I1212 16:15:17.598640 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38afd4c5-3e1b-4a5f-a82c-ac270a369417","Type":"ContainerDied","Data":"84c0935b6357be2256a4d54f7e475cadf80d8b3d5c98477ed87493e12be0c57c"} Dec 12 16:15:17 crc kubenswrapper[4693]: I1212 16:15:17.936878 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 12 16:15:17 crc kubenswrapper[4693]: I1212 16:15:17.962539 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff9d8d2e-ddae-487e-b259-334e7df154d4-config-data\") pod \"ff9d8d2e-ddae-487e-b259-334e7df154d4\" (UID: \"ff9d8d2e-ddae-487e-b259-334e7df154d4\") " Dec 12 16:15:17 crc kubenswrapper[4693]: I1212 16:15:17.962646 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff9d8d2e-ddae-487e-b259-334e7df154d4-logs\") pod \"ff9d8d2e-ddae-487e-b259-334e7df154d4\" (UID: \"ff9d8d2e-ddae-487e-b259-334e7df154d4\") " Dec 12 16:15:17 crc kubenswrapper[4693]: I1212 16:15:17.962872 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6dch\" (UniqueName: \"kubernetes.io/projected/ff9d8d2e-ddae-487e-b259-334e7df154d4-kube-api-access-f6dch\") pod \"ff9d8d2e-ddae-487e-b259-334e7df154d4\" (UID: \"ff9d8d2e-ddae-487e-b259-334e7df154d4\") " Dec 12 16:15:17 crc kubenswrapper[4693]: I1212 16:15:17.962889 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff9d8d2e-ddae-487e-b259-334e7df154d4-combined-ca-bundle\") pod \"ff9d8d2e-ddae-487e-b259-334e7df154d4\" (UID: \"ff9d8d2e-ddae-487e-b259-334e7df154d4\") " Dec 12 16:15:17 crc kubenswrapper[4693]: I1212 16:15:17.970713 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff9d8d2e-ddae-487e-b259-334e7df154d4-logs" (OuterVolumeSpecName: "logs") pod "ff9d8d2e-ddae-487e-b259-334e7df154d4" (UID: "ff9d8d2e-ddae-487e-b259-334e7df154d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.001079 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff9d8d2e-ddae-487e-b259-334e7df154d4-kube-api-access-f6dch" (OuterVolumeSpecName: "kube-api-access-f6dch") pod "ff9d8d2e-ddae-487e-b259-334e7df154d4" (UID: "ff9d8d2e-ddae-487e-b259-334e7df154d4"). InnerVolumeSpecName "kube-api-access-f6dch". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.063489 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff9d8d2e-ddae-487e-b259-334e7df154d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff9d8d2e-ddae-487e-b259-334e7df154d4" (UID: "ff9d8d2e-ddae-487e-b259-334e7df154d4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.068543 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6dch\" (UniqueName: \"kubernetes.io/projected/ff9d8d2e-ddae-487e-b259-334e7df154d4-kube-api-access-f6dch\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.068580 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff9d8d2e-ddae-487e-b259-334e7df154d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.068593 4693 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff9d8d2e-ddae-487e-b259-334e7df154d4-logs\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.071653 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff9d8d2e-ddae-487e-b259-334e7df154d4-config-data" (OuterVolumeSpecName: "config-data") pod "ff9d8d2e-ddae-487e-b259-334e7df154d4" (UID: "ff9d8d2e-ddae-487e-b259-334e7df154d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.172316 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff9d8d2e-ddae-487e-b259-334e7df154d4-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.175224 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.273256 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-combined-ca-bundle\") pod \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.273741 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g6cg\" (UniqueName: \"kubernetes.io/projected/38afd4c5-3e1b-4a5f-a82c-ac270a369417-kube-api-access-5g6cg\") pod \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.273790 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-sg-core-conf-yaml\") pod \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.273822 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-config-data\") pod \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.273857 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38afd4c5-3e1b-4a5f-a82c-ac270a369417-run-httpd\") pod \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.273916 
4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38afd4c5-3e1b-4a5f-a82c-ac270a369417-log-httpd\") pod \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.273954 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-scripts\") pod \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\" (UID: \"38afd4c5-3e1b-4a5f-a82c-ac270a369417\") " Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.274899 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38afd4c5-3e1b-4a5f-a82c-ac270a369417-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "38afd4c5-3e1b-4a5f-a82c-ac270a369417" (UID: "38afd4c5-3e1b-4a5f-a82c-ac270a369417"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.275120 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38afd4c5-3e1b-4a5f-a82c-ac270a369417-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "38afd4c5-3e1b-4a5f-a82c-ac270a369417" (UID: "38afd4c5-3e1b-4a5f-a82c-ac270a369417"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.278134 4693 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38afd4c5-3e1b-4a5f-a82c-ac270a369417-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.278165 4693 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38afd4c5-3e1b-4a5f-a82c-ac270a369417-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.278566 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-scripts" (OuterVolumeSpecName: "scripts") pod "38afd4c5-3e1b-4a5f-a82c-ac270a369417" (UID: "38afd4c5-3e1b-4a5f-a82c-ac270a369417"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.280467 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38afd4c5-3e1b-4a5f-a82c-ac270a369417-kube-api-access-5g6cg" (OuterVolumeSpecName: "kube-api-access-5g6cg") pod "38afd4c5-3e1b-4a5f-a82c-ac270a369417" (UID: "38afd4c5-3e1b-4a5f-a82c-ac270a369417"). InnerVolumeSpecName "kube-api-access-5g6cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.371175 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "38afd4c5-3e1b-4a5f-a82c-ac270a369417" (UID: "38afd4c5-3e1b-4a5f-a82c-ac270a369417"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.382757 4693 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.382801 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.382816 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g6cg\" (UniqueName: \"kubernetes.io/projected/38afd4c5-3e1b-4a5f-a82c-ac270a369417-kube-api-access-5g6cg\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.529764 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38afd4c5-3e1b-4a5f-a82c-ac270a369417" (UID: "38afd4c5-3e1b-4a5f-a82c-ac270a369417"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.538427 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-config-data" (OuterVolumeSpecName: "config-data") pod "38afd4c5-3e1b-4a5f-a82c-ac270a369417" (UID: "38afd4c5-3e1b-4a5f-a82c-ac270a369417"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.587305 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.587604 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38afd4c5-3e1b-4a5f-a82c-ac270a369417-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.612771 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.613611 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff9d8d2e-ddae-487e-b259-334e7df154d4","Type":"ContainerDied","Data":"8d0bf36620b1dd640e6f18b6a5d0d9cc4e408e96afa6a3563e7be1432e136d21"} Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.613683 4693 scope.go:117] "RemoveContainer" containerID="14b8237db29e568147b3955d6b33057a26fd11dee58e9c24a8a6765d48211746" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.616886 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db313a41-7be9-4215-846e-d480bbdb0186","Type":"ContainerStarted","Data":"fbf0bd6a4f399fc4544f70f70af6645033e67a4a5fbf5c9be0fbb9243a343c89"} Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.622048 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38afd4c5-3e1b-4a5f-a82c-ac270a369417","Type":"ContainerDied","Data":"ee64ad6b51d829d9d249993fa9d8030a5211340d4d46d25dc7ce726f329a56b9"} Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.622216 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.645470 4693 scope.go:117] "RemoveContainer" containerID="bf7ab1f648271fc982958c5c19e93dd9bcf793297545cd90f355a24cb06ecf9b" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.659013 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.675130 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.689624 4693 scope.go:117] "RemoveContainer" containerID="cc3b7ab3806a0bd3be79ce34cf1cb76dfd023a66c40be7b4fa8fffb2f7519fb7" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.689812 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.713092 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.726088 4693 scope.go:117] "RemoveContainer" containerID="c10be96a768cad36cf79bc275c59b1767b7236beb6632bce964d2dc57a12150a" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.739343 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 12 16:15:18 crc kubenswrapper[4693]: E1212 16:15:18.739978 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9d8d2e-ddae-487e-b259-334e7df154d4" containerName="nova-api-api" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.739993 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9d8d2e-ddae-487e-b259-334e7df154d4" containerName="nova-api-api" Dec 12 16:15:18 crc kubenswrapper[4693]: E1212 16:15:18.740013 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerName="ceilometer-central-agent" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.740021 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerName="ceilometer-central-agent" Dec 12 16:15:18 crc kubenswrapper[4693]: E1212 16:15:18.740051 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" 
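
The "SyncLoop (PLEG)" lines above are the pod lifecycle event generator reporting ContainerDied/ContainerStarted transitions, which the sync loop turns into the DELETE/REMOVE/ADD work that follows. A rough illustration of dispatching such events (the types and channel wiring here are hypothetical, not the real kubelet API):

```go
package main

import "fmt"

// podLifecycleEvent mirrors the fields visible in the log lines:
// a pod ID, an event type, and the container/sandbox ID in Data.
type podLifecycleEvent struct {
	PodID string
	Type  string // "ContainerStarted" or "ContainerDied"
	Data  string // container ID
}

// syncLoop drains events and reacts per type, the way the log shows
// ContainerDied being followed by RemoveContainer and volume cleanup.
func syncLoop(events <-chan podLifecycleEvent) {
	for ev := range events {
		switch ev.Type {
		case "ContainerDied":
			fmt.Printf("pod %s: container %s died, scheduling cleanup\n", ev.PodID, ev.Data)
		case "ContainerStarted":
			fmt.Printf("pod %s: container %s started\n", ev.PodID, ev.Data)
		}
	}
}

func main() {
	ch := make(chan podLifecycleEvent, 2)
	ch <- podLifecycleEvent{"ff9d8d2e", "ContainerDied", "8d0bf366"}
	ch <- podLifecycleEvent{"db313a41", "ContainerStarted", "fbf0bd6a"}
	close(ch)
	syncLoop(ch)
}
```
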
containerName="sg-core" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.740058 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerName="sg-core" Dec 12 16:15:18 crc kubenswrapper[4693]: E1212 16:15:18.740070 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9d8d2e-ddae-487e-b259-334e7df154d4" containerName="nova-api-log" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.740076 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9d8d2e-ddae-487e-b259-334e7df154d4" containerName="nova-api-log" Dec 12 16:15:18 crc kubenswrapper[4693]: E1212 16:15:18.740109 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerName="ceilometer-notification-agent" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.740115 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerName="ceilometer-notification-agent" Dec 12 16:15:18 crc kubenswrapper[4693]: E1212 16:15:18.740130 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerName="proxy-httpd" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.740135 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerName="proxy-httpd" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.740376 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerName="ceilometer-notification-agent" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.740392 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerName="ceilometer-central-agent" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.740403 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerName="proxy-httpd" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.740417 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff9d8d2e-ddae-487e-b259-334e7df154d4" containerName="nova-api-api" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.740446 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff9d8d2e-ddae-487e-b259-334e7df154d4" containerName="nova-api-log" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.740455 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" containerName="sg-core" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.741750 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.741836 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.753264 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.753563 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.753809 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.765699 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.770817 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.773489 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.773815 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.780479 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.795585 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-run-httpd\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.795829 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-public-tls-certs\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.795883 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-config-data\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.795948 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.796109 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt894\" (UniqueName: \"kubernetes.io/projected/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-kube-api-access-gt894\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.796161 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.796203 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.796683 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-logs\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.796811 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-scripts\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.796835 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-config-data\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.796883 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c779s\" (UniqueName: \"kubernetes.io/projected/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-kube-api-access-c779s\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.796936 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-log-httpd\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.796961 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.837160 4693 scope.go:117] "RemoveContainer" containerID="c57fc982e7da26280b4da3a615702607dad9ceb5264526dd38d48c56ca0a9533" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.901991 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-public-tls-certs\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.902054 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-config-data\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 
crc kubenswrapper[4693]: I1212 16:15:18.902097 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.902169 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt894\" (UniqueName: \"kubernetes.io/projected/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-kube-api-access-gt894\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.902197 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.902221 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.902298 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-logs\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.902329 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-scripts\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.902344 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-config-data\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.902365 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c779s\" (UniqueName: \"kubernetes.io/projected/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-kube-api-access-c779s\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.902389 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-log-httpd\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.902408 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 crc 
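
For each replacement pod the reconciler first logs "VerifyControllerAttachedVolume started" for every volume, and only then "MountVolume started" / "MountVolume.SetUp succeeded"; containers are not started until the whole set has mounted. A simplified sketch of that verify-then-mount gate (invented helper names, sequential rather than kubelet's asynchronous operation executor):

```go
package main

import "fmt"

// mountAll gates MountVolume.SetUp on VerifyControllerAttachedVolume for
// every volume; the pod starts only after the whole set has mounted.
func mountAll(pod string, volumes []string, attached map[string]bool) error {
	for _, v := range volumes {
		if !attached[v] { // stand-in for VerifyControllerAttachedVolume
			return fmt.Errorf("%s: volume %q not attached", pod, v)
		}
	}
	for _, v := range volumes {
		fmt.Printf("MountVolume.SetUp succeeded for %s/%s\n", pod, v) // stand-in for SetUp
	}
	return nil
}

func main() {
	attached := map[string]bool{"config-data": true, "combined-ca-bundle": true, "logs": true}
	err := mountAll("openstack/nova-api-0", []string{"config-data", "combined-ca-bundle", "logs"}, attached)
	if err != nil {
		fmt.Println("not starting containers:", err)
		return
	}
	fmt.Println("all volumes mounted; containers can start")
}
```
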
kubenswrapper[4693]: I1212 16:15:18.902438 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-run-httpd\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.902909 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-run-httpd\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.905232 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-logs\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.911574 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.913589 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-public-tls-certs\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.913883 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-log-httpd\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.914222 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.915441 4693 scope.go:117] "RemoveContainer" containerID="84c0935b6357be2256a4d54f7e475cadf80d8b3d5c98477ed87493e12be0c57c" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.919251 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.922380 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt894\" (UniqueName: \"kubernetes.io/projected/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-kube-api-access-gt894\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.927911 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-config-data\") pod \"nova-api-0\" (UID: 
\"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.936035 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-scripts\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.936194 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c779s\" (UniqueName: \"kubernetes.io/projected/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-kube-api-access-c779s\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.936572 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-config-data\") pod \"ceilometer-0\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " pod="openstack/ceilometer-0" Dec 12 16:15:18 crc kubenswrapper[4693]: I1212 16:15:18.936604 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " pod="openstack/nova-api-0" Dec 12 16:15:19 crc kubenswrapper[4693]: I1212 16:15:19.061206 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 12 16:15:19 crc kubenswrapper[4693]: I1212 16:15:19.061639 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 12 16:15:19 crc kubenswrapper[4693]: I1212 16:15:19.131521 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 12 16:15:19 crc kubenswrapper[4693]: I1212 16:15:19.144387 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:15:19 crc kubenswrapper[4693]: I1212 16:15:19.379888 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38afd4c5-3e1b-4a5f-a82c-ac270a369417" path="/var/lib/kubelet/pods/38afd4c5-3e1b-4a5f-a82c-ac270a369417/volumes" Dec 12 16:15:19 crc kubenswrapper[4693]: I1212 16:15:19.382239 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff9d8d2e-ddae-487e-b259-334e7df154d4" path="/var/lib/kubelet/pods/ff9d8d2e-ddae-487e-b259-334e7df154d4/volumes" Dec 12 16:15:19 crc kubenswrapper[4693]: I1212 16:15:19.786369 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:15:19 crc kubenswrapper[4693]: I1212 16:15:19.799572 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 12 16:15:19 crc kubenswrapper[4693]: I1212 16:15:19.886013 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 12 16:15:20 crc kubenswrapper[4693]: I1212 16:15:20.041419 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Dec 12 16:15:20 crc kubenswrapper[4693]: I1212 16:15:20.165209 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:15:20 crc kubenswrapper[4693]: I1212 16:15:20.358001 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:15:20 crc kubenswrapper[4693]: E1212 16:15:20.358360 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:15:20 crc kubenswrapper[4693]: I1212 16:15:20.703049 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"88524e6a-0097-46c6-b3e4-17cb2b97cd7e","Type":"ContainerStarted","Data":"217f5de13971b73eb0fa42399014544ba1a7e94ec8520792bc485a61c0253a4f"} Dec 12 16:15:20 crc kubenswrapper[4693]: I1212 16:15:20.703371 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"88524e6a-0097-46c6-b3e4-17cb2b97cd7e","Type":"ContainerStarted","Data":"9ac440d5435b2ae2681ccb7388118fa04e4382c2980451b041270404e2a333b8"} Dec 12 16:15:20 crc kubenswrapper[4693]: I1212 16:15:20.703383 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"88524e6a-0097-46c6-b3e4-17cb2b97cd7e","Type":"ContainerStarted","Data":"5aedf6b16fda5d63cb2bc24df144c0891fe3c275d8f8b6c788947b22ce209b00"} Dec 12 16:15:20 crc kubenswrapper[4693]: I1212 16:15:20.706677 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cbcbb8-4d63-4823-a7b2-0b639818ca3d","Type":"ContainerStarted","Data":"d376ae4772daaf4652e2854b7711dc65ebdf8749df1637f4922578a0fdb681fe"} Dec 12 16:15:20 crc kubenswrapper[4693]: I1212 16:15:20.731685 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.731665261 podStartE2EDuration="2.731665261s" podCreationTimestamp="2025-12-12 16:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-12 16:15:20.719315657 +0000 UTC m=+1747.887955258" watchObservedRunningTime="2025-12-12 16:15:20.731665261 +0000 UTC m=+1747.900304852" Dec 12 16:15:21 crc kubenswrapper[4693]: I1212 16:15:21.437505 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 12 16:15:21 crc kubenswrapper[4693]: I1212 16:15:21.724700 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86" Dec 12 16:15:21 crc kubenswrapper[4693]: I1212 16:15:21.733064 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db313a41-7be9-4215-846e-d480bbdb0186","Type":"ContainerStarted","Data":"05afa3bbd667438b1f5076c4a88c28988d55f0568e9e656a48b9e0dbb1d58641"} Dec 12 16:15:21 crc kubenswrapper[4693]: I1212 16:15:21.735502 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cbcbb8-4d63-4823-a7b2-0b639818ca3d","Type":"ContainerStarted","Data":"e679ade68aa7806f403773e664039b34ba0cc9db19788c4e53cc633439b0d36a"} Dec 12 16:15:21 crc kubenswrapper[4693]: I1212 16:15:21.795043 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-962pm"] Dec 12 16:15:21 crc kubenswrapper[4693]: I1212 16:15:21.795501 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-962pm" podUID="9850dbcd-93ea-47b5-a812-9ba8821b8110" containerName="dnsmasq-dns" containerID="cri-o://8b690b5cc50e1756e7f84b80c990b953384b8776b66347598be63f531c66a2b4" gracePeriod=10 Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.653388 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.750261 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cbcbb8-4d63-4823-a7b2-0b639818ca3d","Type":"ContainerStarted","Data":"a2240948bb380632f151e77a8c06a37a692d435cfa111b21fea9790a952c997c"} Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.756161 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-config\") pod \"9850dbcd-93ea-47b5-a812-9ba8821b8110\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.756261 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-dns-svc\") pod \"9850dbcd-93ea-47b5-a812-9ba8821b8110\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.756315 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-ovsdbserver-nb\") pod \"9850dbcd-93ea-47b5-a812-9ba8821b8110\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.756375 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2q4j\" (UniqueName: \"kubernetes.io/projected/9850dbcd-93ea-47b5-a812-9ba8821b8110-kube-api-access-t2q4j\") pod \"9850dbcd-93ea-47b5-a812-9ba8821b8110\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " Dec 12 16:15:22 crc 
kubenswrapper[4693]: I1212 16:15:22.756632 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-ovsdbserver-sb\") pod \"9850dbcd-93ea-47b5-a812-9ba8821b8110\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.756692 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-dns-swift-storage-0\") pod \"9850dbcd-93ea-47b5-a812-9ba8821b8110\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.761660 4693 generic.go:334] "Generic (PLEG): container finished" podID="9850dbcd-93ea-47b5-a812-9ba8821b8110" containerID="8b690b5cc50e1756e7f84b80c990b953384b8776b66347598be63f531c66a2b4" exitCode=0 Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.761919 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-962pm" event={"ID":"9850dbcd-93ea-47b5-a812-9ba8821b8110","Type":"ContainerDied","Data":"8b690b5cc50e1756e7f84b80c990b953384b8776b66347598be63f531c66a2b4"} Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.761947 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-962pm" event={"ID":"9850dbcd-93ea-47b5-a812-9ba8821b8110","Type":"ContainerDied","Data":"7117ba545edb2e49cdb881b8603f403554e4676439bc477e78f66a693cbdf94f"} Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.761962 4693 scope.go:117] "RemoveContainer" containerID="8b690b5cc50e1756e7f84b80c990b953384b8776b66347598be63f531c66a2b4" Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.762155 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-962pm" Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.762382 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9850dbcd-93ea-47b5-a812-9ba8821b8110-kube-api-access-t2q4j" (OuterVolumeSpecName: "kube-api-access-t2q4j") pod "9850dbcd-93ea-47b5-a812-9ba8821b8110" (UID: "9850dbcd-93ea-47b5-a812-9ba8821b8110"). InnerVolumeSpecName "kube-api-access-t2q4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.857153 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9850dbcd-93ea-47b5-a812-9ba8821b8110" (UID: "9850dbcd-93ea-47b5-a812-9ba8821b8110"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.857944 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9850dbcd-93ea-47b5-a812-9ba8821b8110" (UID: "9850dbcd-93ea-47b5-a812-9ba8821b8110"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.859161 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-dns-swift-storage-0\") pod \"9850dbcd-93ea-47b5-a812-9ba8821b8110\" (UID: \"9850dbcd-93ea-47b5-a812-9ba8821b8110\") " Dec 12 16:15:22 crc kubenswrapper[4693]: W1212 16:15:22.861458 4693 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9850dbcd-93ea-47b5-a812-9ba8821b8110/volumes/kubernetes.io~configmap/dns-swift-storage-0 Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.861483 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9850dbcd-93ea-47b5-a812-9ba8821b8110" (UID: "9850dbcd-93ea-47b5-a812-9ba8821b8110"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.861981 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.862023 4693 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.862036 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2q4j\" (UniqueName: \"kubernetes.io/projected/9850dbcd-93ea-47b5-a812-9ba8821b8110-kube-api-access-t2q4j\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.882627 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9850dbcd-93ea-47b5-a812-9ba8821b8110" (UID: "9850dbcd-93ea-47b5-a812-9ba8821b8110"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.903105 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9850dbcd-93ea-47b5-a812-9ba8821b8110" (UID: "9850dbcd-93ea-47b5-a812-9ba8821b8110"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.908766 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-config" (OuterVolumeSpecName: "config") pod "9850dbcd-93ea-47b5-a812-9ba8821b8110" (UID: "9850dbcd-93ea-47b5-a812-9ba8821b8110"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.963449 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.963476 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:22 crc kubenswrapper[4693]: I1212 16:15:22.963485 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9850dbcd-93ea-47b5-a812-9ba8821b8110-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:23 crc kubenswrapper[4693]: I1212 16:15:23.109010 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-962pm"] Dec 12 16:15:23 crc kubenswrapper[4693]: I1212 16:15:23.123588 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-962pm"] Dec 12 16:15:23 crc kubenswrapper[4693]: I1212 16:15:23.382536 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9850dbcd-93ea-47b5-a812-9ba8821b8110" path="/var/lib/kubelet/pods/9850dbcd-93ea-47b5-a812-9ba8821b8110/volumes" Dec 12 16:15:23 crc kubenswrapper[4693]: I1212 16:15:23.707869 4693 scope.go:117] "RemoveContainer" containerID="e8f9699fa6eca2378fdefee886718df35104b69fb380c66f17b82350917426f4" Dec 12 16:15:23 crc kubenswrapper[4693]: I1212 16:15:23.775902 4693 scope.go:117] "RemoveContainer" containerID="8b690b5cc50e1756e7f84b80c990b953384b8776b66347598be63f531c66a2b4" Dec 12 16:15:23 crc kubenswrapper[4693]: E1212 16:15:23.776225 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b690b5cc50e1756e7f84b80c990b953384b8776b66347598be63f531c66a2b4\": container with ID starting with 8b690b5cc50e1756e7f84b80c990b953384b8776b66347598be63f531c66a2b4 not found: ID does not exist" containerID="8b690b5cc50e1756e7f84b80c990b953384b8776b66347598be63f531c66a2b4" Dec 12 16:15:23 crc kubenswrapper[4693]: I1212 16:15:23.776352 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b690b5cc50e1756e7f84b80c990b953384b8776b66347598be63f531c66a2b4"} err="failed to get container status \"8b690b5cc50e1756e7f84b80c990b953384b8776b66347598be63f531c66a2b4\": rpc error: code = NotFound desc = could not find container \"8b690b5cc50e1756e7f84b80c990b953384b8776b66347598be63f531c66a2b4\": container with ID starting with 8b690b5cc50e1756e7f84b80c990b953384b8776b66347598be63f531c66a2b4 not found: ID does not exist" Dec 12 16:15:23 crc kubenswrapper[4693]: I1212 16:15:23.776458 4693 scope.go:117] "RemoveContainer" containerID="e8f9699fa6eca2378fdefee886718df35104b69fb380c66f17b82350917426f4" Dec 12 16:15:23 crc kubenswrapper[4693]: E1212 16:15:23.777719 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8f9699fa6eca2378fdefee886718df35104b69fb380c66f17b82350917426f4\": container with ID starting with e8f9699fa6eca2378fdefee886718df35104b69fb380c66f17b82350917426f4 not found: ID does not exist" containerID="e8f9699fa6eca2378fdefee886718df35104b69fb380c66f17b82350917426f4" Dec 12 16:15:23 crc kubenswrapper[4693]: I1212 16:15:23.777855 4693 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f9699fa6eca2378fdefee886718df35104b69fb380c66f17b82350917426f4"} err="failed to get container status \"e8f9699fa6eca2378fdefee886718df35104b69fb380c66f17b82350917426f4\": rpc error: code = NotFound desc = could not find container \"e8f9699fa6eca2378fdefee886718df35104b69fb380c66f17b82350917426f4\": container with ID starting with e8f9699fa6eca2378fdefee886718df35104b69fb380c66f17b82350917426f4 not found: ID does not exist" Dec 12 16:15:24 crc kubenswrapper[4693]: I1212 16:15:24.061448 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 12 16:15:24 crc kubenswrapper[4693]: I1212 16:15:24.061753 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 12 16:15:24 crc kubenswrapper[4693]: I1212 16:15:24.799444 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db313a41-7be9-4215-846e-d480bbdb0186","Type":"ContainerStarted","Data":"d7627a800a7e1cfd9479987d744324bb688660b154450196271b45693c115675"} Dec 12 16:15:24 crc kubenswrapper[4693]: I1212 16:15:24.801766 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cbcbb8-4d63-4823-a7b2-0b639818ca3d","Type":"ContainerStarted","Data":"7d5752c77edd8d0079cea6ebca3b88a4b309e8c37f91bea5f22dd517fbcc89e9"} Dec 12 16:15:24 crc kubenswrapper[4693]: I1212 16:15:24.886352 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 12 16:15:24 crc kubenswrapper[4693]: I1212 16:15:24.926192 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 12 16:15:25 crc kubenswrapper[4693]: I1212 16:15:25.074502 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="459816af-8a53-489f-9a69-fe427a9e2ef3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 16:15:25 crc kubenswrapper[4693]: I1212 16:15:25.074503 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="459816af-8a53-489f-9a69-fe427a9e2ef3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 16:15:25 crc kubenswrapper[4693]: I1212 16:15:25.867719 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 12 16:15:26 crc kubenswrapper[4693]: I1212 16:15:26.834235 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cbcbb8-4d63-4823-a7b2-0b639818ca3d","Type":"ContainerStarted","Data":"b586faad9800b7ae30f7e883b9ec06ca84c8782da41c0029b48131f74ff4b759"} Dec 12 16:15:26 crc kubenswrapper[4693]: I1212 16:15:26.834867 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 12 16:15:26 crc kubenswrapper[4693]: I1212 16:15:26.834711 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerName="ceilometer-central-agent" containerID="cri-o://e679ade68aa7806f403773e664039b34ba0cc9db19788c4e53cc633439b0d36a" gracePeriod=30 Dec 12 16:15:26 
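
The two "Probe failed" lines for nova-metadata-0 above are startup probes timing out: an HTTPS GET that does not return response headers within the probe timeout surfaces exactly the "net/http: request canceled (Client.Timeout exceeded while awaiting headers)" text. A stripped-down equivalent of one probe attempt (the endpoint is the pod IP from the log and is unreachable outside the cluster; `InsecureSkipVerify` stands in for kubelet's probe TLS handling):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP startup-probe attempt with a hard timeout;
// a slow or unreachable endpoint yields the same net/http timeout
// error text seen in the kubelet log.
func probe(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		Transport: &http.Transport{
			// Probes to self-signed pod endpoints skip verification here;
			// this is a simplification of what kubelet actually does.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "Client.Timeout exceeded while awaiting headers"
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("https://10.217.0.246:8775/", 1*time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```
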
crc kubenswrapper[4693]: I1212 16:15:26.835007 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerName="proxy-httpd" containerID="cri-o://b586faad9800b7ae30f7e883b9ec06ca84c8782da41c0029b48131f74ff4b759" gracePeriod=30 Dec 12 16:15:26 crc kubenswrapper[4693]: I1212 16:15:26.835166 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerName="ceilometer-notification-agent" containerID="cri-o://a2240948bb380632f151e77a8c06a37a692d435cfa111b21fea9790a952c997c" gracePeriod=30 Dec 12 16:15:26 crc kubenswrapper[4693]: I1212 16:15:26.835228 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerName="sg-core" containerID="cri-o://7d5752c77edd8d0079cea6ebca3b88a4b309e8c37f91bea5f22dd517fbcc89e9" gracePeriod=30 Dec 12 16:15:26 crc kubenswrapper[4693]: I1212 16:15:26.845082 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db313a41-7be9-4215-846e-d480bbdb0186","Type":"ContainerStarted","Data":"6a35e6abd6d171c735a1a0556783693be5c13e6c8e4092e15f7fe988d2094422"} Dec 12 16:15:26 crc kubenswrapper[4693]: I1212 16:15:26.845306 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="db313a41-7be9-4215-846e-d480bbdb0186" containerName="aodh-api" containerID="cri-o://fbf0bd6a4f399fc4544f70f70af6645033e67a4a5fbf5c9be0fbb9243a343c89" gracePeriod=30 Dec 12 16:15:26 crc kubenswrapper[4693]: I1212 16:15:26.845433 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="db313a41-7be9-4215-846e-d480bbdb0186" containerName="aodh-listener" containerID="cri-o://6a35e6abd6d171c735a1a0556783693be5c13e6c8e4092e15f7fe988d2094422" gracePeriod=30 Dec 12 16:15:26 crc kubenswrapper[4693]: I1212 16:15:26.845468 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="db313a41-7be9-4215-846e-d480bbdb0186" containerName="aodh-notifier" containerID="cri-o://d7627a800a7e1cfd9479987d744324bb688660b154450196271b45693c115675" gracePeriod=30 Dec 12 16:15:26 crc kubenswrapper[4693]: I1212 16:15:26.845501 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="db313a41-7be9-4215-846e-d480bbdb0186" containerName="aodh-evaluator" containerID="cri-o://05afa3bbd667438b1f5076c4a88c28988d55f0568e9e656a48b9e0dbb1d58641" gracePeriod=30 Dec 12 16:15:26 crc kubenswrapper[4693]: I1212 16:15:26.876012 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.824076589 podStartE2EDuration="8.875989486s" podCreationTimestamp="2025-12-12 16:15:18 +0000 UTC" firstStartedPulling="2025-12-12 16:15:19.819607994 +0000 UTC m=+1746.988247595" lastFinishedPulling="2025-12-12 16:15:25.871520891 +0000 UTC m=+1753.040160492" observedRunningTime="2025-12-12 16:15:26.857606809 +0000 UTC m=+1754.026246410" watchObservedRunningTime="2025-12-12 16:15:26.875989486 +0000 UTC m=+1754.044629087" Dec 12 16:15:26 crc kubenswrapper[4693]: I1212 16:15:26.908505 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.027677581 podStartE2EDuration="10.908486554s" podCreationTimestamp="2025-12-12 16:15:16 
+0000 UTC" firstStartedPulling="2025-12-12 16:15:17.539778801 +0000 UTC m=+1744.708418402" lastFinishedPulling="2025-12-12 16:15:26.420587774 +0000 UTC m=+1753.589227375" observedRunningTime="2025-12-12 16:15:26.900154599 +0000 UTC m=+1754.068794200" watchObservedRunningTime="2025-12-12 16:15:26.908486554 +0000 UTC m=+1754.077126155" Dec 12 16:15:27 crc kubenswrapper[4693]: I1212 16:15:27.859032 4693 generic.go:334] "Generic (PLEG): container finished" podID="db313a41-7be9-4215-846e-d480bbdb0186" containerID="d7627a800a7e1cfd9479987d744324bb688660b154450196271b45693c115675" exitCode=0 Dec 12 16:15:27 crc kubenswrapper[4693]: I1212 16:15:27.859343 4693 generic.go:334] "Generic (PLEG): container finished" podID="db313a41-7be9-4215-846e-d480bbdb0186" containerID="05afa3bbd667438b1f5076c4a88c28988d55f0568e9e656a48b9e0dbb1d58641" exitCode=0 Dec 12 16:15:27 crc kubenswrapper[4693]: I1212 16:15:27.859354 4693 generic.go:334] "Generic (PLEG): container finished" podID="db313a41-7be9-4215-846e-d480bbdb0186" containerID="fbf0bd6a4f399fc4544f70f70af6645033e67a4a5fbf5c9be0fbb9243a343c89" exitCode=0 Dec 12 16:15:27 crc kubenswrapper[4693]: I1212 16:15:27.859108 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db313a41-7be9-4215-846e-d480bbdb0186","Type":"ContainerDied","Data":"d7627a800a7e1cfd9479987d744324bb688660b154450196271b45693c115675"} Dec 12 16:15:27 crc kubenswrapper[4693]: I1212 16:15:27.859436 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db313a41-7be9-4215-846e-d480bbdb0186","Type":"ContainerDied","Data":"05afa3bbd667438b1f5076c4a88c28988d55f0568e9e656a48b9e0dbb1d58641"} Dec 12 16:15:27 crc kubenswrapper[4693]: I1212 16:15:27.859453 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db313a41-7be9-4215-846e-d480bbdb0186","Type":"ContainerDied","Data":"fbf0bd6a4f399fc4544f70f70af6645033e67a4a5fbf5c9be0fbb9243a343c89"} Dec 12 16:15:27 crc kubenswrapper[4693]: I1212 16:15:27.862688 4693 generic.go:334] "Generic (PLEG): container finished" podID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerID="b586faad9800b7ae30f7e883b9ec06ca84c8782da41c0029b48131f74ff4b759" exitCode=0 Dec 12 16:15:27 crc kubenswrapper[4693]: I1212 16:15:27.862725 4693 generic.go:334] "Generic (PLEG): container finished" podID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerID="7d5752c77edd8d0079cea6ebca3b88a4b309e8c37f91bea5f22dd517fbcc89e9" exitCode=2 Dec 12 16:15:27 crc kubenswrapper[4693]: I1212 16:15:27.862724 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cbcbb8-4d63-4823-a7b2-0b639818ca3d","Type":"ContainerDied","Data":"b586faad9800b7ae30f7e883b9ec06ca84c8782da41c0029b48131f74ff4b759"} Dec 12 16:15:27 crc kubenswrapper[4693]: I1212 16:15:27.862758 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cbcbb8-4d63-4823-a7b2-0b639818ca3d","Type":"ContainerDied","Data":"7d5752c77edd8d0079cea6ebca3b88a4b309e8c37f91bea5f22dd517fbcc89e9"} Dec 12 16:15:27 crc kubenswrapper[4693]: I1212 16:15:27.862774 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cbcbb8-4d63-4823-a7b2-0b639818ca3d","Type":"ContainerDied","Data":"a2240948bb380632f151e77a8c06a37a692d435cfa111b21fea9790a952c997c"} Dec 12 16:15:27 crc kubenswrapper[4693]: I1212 16:15:27.862736 4693 generic.go:334] "Generic (PLEG): container finished" podID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" 
containerID="a2240948bb380632f151e77a8c06a37a692d435cfa111b21fea9790a952c997c" exitCode=0 Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.132506 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.132827 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.884909 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.894377 4693 generic.go:334] "Generic (PLEG): container finished" podID="870d139b-dca4-4055-acd5-6b264b9cf889" containerID="254f571686df56256997b8911120c0474ece6d2e0c11fa4b24eba51c7ba42d8f" exitCode=137 Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.894440 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"870d139b-dca4-4055-acd5-6b264b9cf889","Type":"ContainerDied","Data":"254f571686df56256997b8911120c0474ece6d2e0c11fa4b24eba51c7ba42d8f"} Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.898395 4693 generic.go:334] "Generic (PLEG): container finished" podID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerID="e679ade68aa7806f403773e664039b34ba0cc9db19788c4e53cc633439b0d36a" exitCode=0 Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.898433 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cbcbb8-4d63-4823-a7b2-0b639818ca3d","Type":"ContainerDied","Data":"e679ade68aa7806f403773e664039b34ba0cc9db19788c4e53cc633439b0d36a"} Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.898473 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58cbcbb8-4d63-4823-a7b2-0b639818ca3d","Type":"ContainerDied","Data":"d376ae4772daaf4652e2854b7711dc65ebdf8749df1637f4922578a0fdb681fe"} Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.898486 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.898494 4693 scope.go:117] "RemoveContainer" containerID="b586faad9800b7ae30f7e883b9ec06ca84c8782da41c0029b48131f74ff4b759" Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.942890 4693 scope.go:117] "RemoveContainer" containerID="7d5752c77edd8d0079cea6ebca3b88a4b309e8c37f91bea5f22dd517fbcc89e9" Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.969917 4693 scope.go:117] "RemoveContainer" containerID="a2240948bb380632f151e77a8c06a37a692d435cfa111b21fea9790a952c997c" Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.973261 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-run-httpd\") pod \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.973406 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-combined-ca-bundle\") pod \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.973487 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-scripts\") pod \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.973548 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-config-data\") pod \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.973784 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-log-httpd\") pod \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.973823 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c779s\" (UniqueName: \"kubernetes.io/projected/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-kube-api-access-c779s\") pod \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.973826 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "58cbcbb8-4d63-4823-a7b2-0b639818ca3d" (UID: "58cbcbb8-4d63-4823-a7b2-0b639818ca3d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.973888 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-sg-core-conf-yaml\") pod \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\" (UID: \"58cbcbb8-4d63-4823-a7b2-0b639818ca3d\") " Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.974582 4693 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.975260 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "58cbcbb8-4d63-4823-a7b2-0b639818ca3d" (UID: "58cbcbb8-4d63-4823-a7b2-0b639818ca3d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:15:29 crc kubenswrapper[4693]: I1212 16:15:29.979795 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-kube-api-access-c779s" (OuterVolumeSpecName: "kube-api-access-c779s") pod "58cbcbb8-4d63-4823-a7b2-0b639818ca3d" (UID: "58cbcbb8-4d63-4823-a7b2-0b639818ca3d"). InnerVolumeSpecName "kube-api-access-c779s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.006287 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-scripts" (OuterVolumeSpecName: "scripts") pod "58cbcbb8-4d63-4823-a7b2-0b639818ca3d" (UID: "58cbcbb8-4d63-4823-a7b2-0b639818ca3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.017372 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "58cbcbb8-4d63-4823-a7b2-0b639818ca3d" (UID: "58cbcbb8-4d63-4823-a7b2-0b639818ca3d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.077009 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.077051 4693 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.077066 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c779s\" (UniqueName: \"kubernetes.io/projected/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-kube-api-access-c779s\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.077079 4693 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.100383 4693 scope.go:117] "RemoveContainer" containerID="e679ade68aa7806f403773e664039b34ba0cc9db19788c4e53cc633439b0d36a" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.142772 4693 scope.go:117] "RemoveContainer" containerID="b586faad9800b7ae30f7e883b9ec06ca84c8782da41c0029b48131f74ff4b759" Dec 12 16:15:30 crc kubenswrapper[4693]: E1212 16:15:30.143215 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b586faad9800b7ae30f7e883b9ec06ca84c8782da41c0029b48131f74ff4b759\": container with ID starting with b586faad9800b7ae30f7e883b9ec06ca84c8782da41c0029b48131f74ff4b759 not found: ID does not exist" containerID="b586faad9800b7ae30f7e883b9ec06ca84c8782da41c0029b48131f74ff4b759" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.143236 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b586faad9800b7ae30f7e883b9ec06ca84c8782da41c0029b48131f74ff4b759"} err="failed to get container status \"b586faad9800b7ae30f7e883b9ec06ca84c8782da41c0029b48131f74ff4b759\": rpc error: code = NotFound desc = could not find container \"b586faad9800b7ae30f7e883b9ec06ca84c8782da41c0029b48131f74ff4b759\": container with ID starting with b586faad9800b7ae30f7e883b9ec06ca84c8782da41c0029b48131f74ff4b759 not found: ID does not exist" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.143259 4693 scope.go:117] "RemoveContainer" containerID="7d5752c77edd8d0079cea6ebca3b88a4b309e8c37f91bea5f22dd517fbcc89e9" Dec 12 16:15:30 crc kubenswrapper[4693]: E1212 16:15:30.143807 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d5752c77edd8d0079cea6ebca3b88a4b309e8c37f91bea5f22dd517fbcc89e9\": container with ID starting with 7d5752c77edd8d0079cea6ebca3b88a4b309e8c37f91bea5f22dd517fbcc89e9 not found: ID does not exist" containerID="7d5752c77edd8d0079cea6ebca3b88a4b309e8c37f91bea5f22dd517fbcc89e9" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.143825 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5752c77edd8d0079cea6ebca3b88a4b309e8c37f91bea5f22dd517fbcc89e9"} err="failed to get container status \"7d5752c77edd8d0079cea6ebca3b88a4b309e8c37f91bea5f22dd517fbcc89e9\": rpc error: code = NotFound desc = could 
not find container \"7d5752c77edd8d0079cea6ebca3b88a4b309e8c37f91bea5f22dd517fbcc89e9\": container with ID starting with 7d5752c77edd8d0079cea6ebca3b88a4b309e8c37f91bea5f22dd517fbcc89e9 not found: ID does not exist" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.143839 4693 scope.go:117] "RemoveContainer" containerID="a2240948bb380632f151e77a8c06a37a692d435cfa111b21fea9790a952c997c" Dec 12 16:15:30 crc kubenswrapper[4693]: E1212 16:15:30.144015 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2240948bb380632f151e77a8c06a37a692d435cfa111b21fea9790a952c997c\": container with ID starting with a2240948bb380632f151e77a8c06a37a692d435cfa111b21fea9790a952c997c not found: ID does not exist" containerID="a2240948bb380632f151e77a8c06a37a692d435cfa111b21fea9790a952c997c" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.144031 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2240948bb380632f151e77a8c06a37a692d435cfa111b21fea9790a952c997c"} err="failed to get container status \"a2240948bb380632f151e77a8c06a37a692d435cfa111b21fea9790a952c997c\": rpc error: code = NotFound desc = could not find container \"a2240948bb380632f151e77a8c06a37a692d435cfa111b21fea9790a952c997c\": container with ID starting with a2240948bb380632f151e77a8c06a37a692d435cfa111b21fea9790a952c997c not found: ID does not exist" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.144044 4693 scope.go:117] "RemoveContainer" containerID="e679ade68aa7806f403773e664039b34ba0cc9db19788c4e53cc633439b0d36a" Dec 12 16:15:30 crc kubenswrapper[4693]: E1212 16:15:30.144220 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e679ade68aa7806f403773e664039b34ba0cc9db19788c4e53cc633439b0d36a\": container with ID starting with e679ade68aa7806f403773e664039b34ba0cc9db19788c4e53cc633439b0d36a not found: ID does not exist" containerID="e679ade68aa7806f403773e664039b34ba0cc9db19788c4e53cc633439b0d36a" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.144235 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e679ade68aa7806f403773e664039b34ba0cc9db19788c4e53cc633439b0d36a"} err="failed to get container status \"e679ade68aa7806f403773e664039b34ba0cc9db19788c4e53cc633439b0d36a\": rpc error: code = NotFound desc = could not find container \"e679ade68aa7806f403773e664039b34ba0cc9db19788c4e53cc633439b0d36a\": container with ID starting with e679ade68aa7806f403773e664039b34ba0cc9db19788c4e53cc633439b0d36a not found: ID does not exist" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.147837 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="88524e6a-0097-46c6-b3e4-17cb2b97cd7e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.249:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.148173 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="88524e6a-0097-46c6-b3e4-17cb2b97cd7e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.249:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.180782 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-config-data" (OuterVolumeSpecName: "config-data") pod "58cbcbb8-4d63-4823-a7b2-0b639818ca3d" (UID: "58cbcbb8-4d63-4823-a7b2-0b639818ca3d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.184484 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58cbcbb8-4d63-4823-a7b2-0b639818ca3d" (UID: "58cbcbb8-4d63-4823-a7b2-0b639818ca3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.281559 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.281593 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cbcbb8-4d63-4823-a7b2-0b639818ca3d-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.302056 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.325061 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.343825 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.382609 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzcf9\" (UniqueName: \"kubernetes.io/projected/870d139b-dca4-4055-acd5-6b264b9cf889-kube-api-access-dzcf9\") pod \"870d139b-dca4-4055-acd5-6b264b9cf889\" (UID: \"870d139b-dca4-4055-acd5-6b264b9cf889\") " Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.382666 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/870d139b-dca4-4055-acd5-6b264b9cf889-config-data\") pod \"870d139b-dca4-4055-acd5-6b264b9cf889\" (UID: \"870d139b-dca4-4055-acd5-6b264b9cf889\") " Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.382892 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870d139b-dca4-4055-acd5-6b264b9cf889-combined-ca-bundle\") pod \"870d139b-dca4-4055-acd5-6b264b9cf889\" (UID: \"870d139b-dca4-4055-acd5-6b264b9cf889\") " Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.388571 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870d139b-dca4-4055-acd5-6b264b9cf889-kube-api-access-dzcf9" (OuterVolumeSpecName: "kube-api-access-dzcf9") pod "870d139b-dca4-4055-acd5-6b264b9cf889" (UID: "870d139b-dca4-4055-acd5-6b264b9cf889"). InnerVolumeSpecName "kube-api-access-dzcf9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.401075 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:15:30 crc kubenswrapper[4693]: E1212 16:15:30.401817 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerName="proxy-httpd" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.401847 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerName="proxy-httpd" Dec 12 16:15:30 crc kubenswrapper[4693]: E1212 16:15:30.401897 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870d139b-dca4-4055-acd5-6b264b9cf889" containerName="nova-cell1-novncproxy-novncproxy" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.401906 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="870d139b-dca4-4055-acd5-6b264b9cf889" containerName="nova-cell1-novncproxy-novncproxy" Dec 12 16:15:30 crc kubenswrapper[4693]: E1212 16:15:30.401928 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9850dbcd-93ea-47b5-a812-9ba8821b8110" containerName="dnsmasq-dns" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.401936 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="9850dbcd-93ea-47b5-a812-9ba8821b8110" containerName="dnsmasq-dns" Dec 12 16:15:30 crc kubenswrapper[4693]: E1212 16:15:30.401948 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9850dbcd-93ea-47b5-a812-9ba8821b8110" containerName="init" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.401956 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="9850dbcd-93ea-47b5-a812-9ba8821b8110" containerName="init" Dec 12 16:15:30 crc kubenswrapper[4693]: E1212 16:15:30.401973 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerName="ceilometer-central-agent" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.401981 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerName="ceilometer-central-agent" Dec 12 16:15:30 crc kubenswrapper[4693]: E1212 16:15:30.401997 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerName="ceilometer-notification-agent" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.402004 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerName="ceilometer-notification-agent" Dec 12 16:15:30 crc kubenswrapper[4693]: E1212 16:15:30.402026 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerName="sg-core" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.402034 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerName="sg-core" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.402387 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerName="sg-core" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.402426 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerName="proxy-httpd" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.402453 4693 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9850dbcd-93ea-47b5-a812-9ba8821b8110" containerName="dnsmasq-dns" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.402465 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="870d139b-dca4-4055-acd5-6b264b9cf889" containerName="nova-cell1-novncproxy-novncproxy" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.402475 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerName="ceilometer-notification-agent" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.402492 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" containerName="ceilometer-central-agent" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.405587 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.411112 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.413301 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.423530 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/870d139b-dca4-4055-acd5-6b264b9cf889-config-data" (OuterVolumeSpecName: "config-data") pod "870d139b-dca4-4055-acd5-6b264b9cf889" (UID: "870d139b-dca4-4055-acd5-6b264b9cf889"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.438645 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.457759 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/870d139b-dca4-4055-acd5-6b264b9cf889-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "870d139b-dca4-4055-acd5-6b264b9cf889" (UID: "870d139b-dca4-4055-acd5-6b264b9cf889"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.486797 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzcf9\" (UniqueName: \"kubernetes.io/projected/870d139b-dca4-4055-acd5-6b264b9cf889-kube-api-access-dzcf9\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.486840 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/870d139b-dca4-4055-acd5-6b264b9cf889-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.486853 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870d139b-dca4-4055-acd5-6b264b9cf889-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.589446 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.589509 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a36bedaf-490a-4ae4-bb82-653f88f8d791-run-httpd\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.589555 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtmht\" (UniqueName: \"kubernetes.io/projected/a36bedaf-490a-4ae4-bb82-653f88f8d791-kube-api-access-gtmht\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.589738 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-scripts\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.589822 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.589964 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a36bedaf-490a-4ae4-bb82-653f88f8d791-log-httpd\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.590115 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-config-data\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.692481 4693 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtmht\" (UniqueName: \"kubernetes.io/projected/a36bedaf-490a-4ae4-bb82-653f88f8d791-kube-api-access-gtmht\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.692574 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-scripts\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.692616 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.692653 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a36bedaf-490a-4ae4-bb82-653f88f8d791-log-httpd\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.692714 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-config-data\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.692806 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.692850 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a36bedaf-490a-4ae4-bb82-653f88f8d791-run-httpd\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.693542 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a36bedaf-490a-4ae4-bb82-653f88f8d791-run-httpd\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.693549 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a36bedaf-490a-4ae4-bb82-653f88f8d791-log-httpd\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.696813 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.697162 4693 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-config-data\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.697670 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.701079 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-scripts\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.714267 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtmht\" (UniqueName: \"kubernetes.io/projected/a36bedaf-490a-4ae4-bb82-653f88f8d791-kube-api-access-gtmht\") pod \"ceilometer-0\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") " pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.878160 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.914781 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.914867 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"870d139b-dca4-4055-acd5-6b264b9cf889","Type":"ContainerDied","Data":"069be7727790fb35325d4cba4b3eb425ed3d3591114a932e50945141d7b78ef1"} Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.915006 4693 scope.go:117] "RemoveContainer" containerID="254f571686df56256997b8911120c0474ece6d2e0c11fa4b24eba51c7ba42d8f" Dec 12 16:15:30 crc kubenswrapper[4693]: I1212 16:15:30.975997 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.009512 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.081060 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.083571 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.086775 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.087083 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.093321 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.113021 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.209108 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.209200 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8bqd\" (UniqueName: \"kubernetes.io/projected/8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32-kube-api-access-v8bqd\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.209230 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.209641 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.209717 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.312600 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.312710 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8bqd\" (UniqueName: \"kubernetes.io/projected/8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32-kube-api-access-v8bqd\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.312737 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.312853 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.312894 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.318147 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.318431 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.320954 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.326996 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.329741 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8bqd\" (UniqueName: \"kubernetes.io/projected/8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32-kube-api-access-v8bqd\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.374917 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58cbcbb8-4d63-4823-a7b2-0b639818ca3d" path="/var/lib/kubelet/pods/58cbcbb8-4d63-4823-a7b2-0b639818ca3d/volumes" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.376444 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="870d139b-dca4-4055-acd5-6b264b9cf889" 
path="/var/lib/kubelet/pods/870d139b-dca4-4055-acd5-6b264b9cf889/volumes" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.405729 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.443307 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.934574 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 16:15:31 crc kubenswrapper[4693]: W1212 16:15:31.946822 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c3b3a9b_9b0e_4f6e_b269_20031a8e1f32.slice/crio-57b29760b8df9ba8845d68e7cc50da340d8818fd56aca432b9513b90783540fd WatchSource:0}: Error finding container 57b29760b8df9ba8845d68e7cc50da340d8818fd56aca432b9513b90783540fd: Status 404 returned error can't find the container with id 57b29760b8df9ba8845d68e7cc50da340d8818fd56aca432b9513b90783540fd Dec 12 16:15:31 crc kubenswrapper[4693]: I1212 16:15:31.948476 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a36bedaf-490a-4ae4-bb82-653f88f8d791","Type":"ContainerStarted","Data":"9eba8295e180ce2ccf33fc9049b2e76fc93903895f58ff04f0e4ffd722db0340"} Dec 12 16:15:32 crc kubenswrapper[4693]: I1212 16:15:32.358165 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:15:32 crc kubenswrapper[4693]: E1212 16:15:32.358651 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:15:32 crc kubenswrapper[4693]: I1212 16:15:32.962065 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32","Type":"ContainerStarted","Data":"d5e3a3db70e6b68b69f167c528225d0d669619e99389ccd410342f34638b6292"} Dec 12 16:15:32 crc kubenswrapper[4693]: I1212 16:15:32.962530 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8c3b3a9b-9b0e-4f6e-b269-20031a8e1f32","Type":"ContainerStarted","Data":"57b29760b8df9ba8845d68e7cc50da340d8818fd56aca432b9513b90783540fd"} Dec 12 16:15:32 crc kubenswrapper[4693]: I1212 16:15:32.964712 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a36bedaf-490a-4ae4-bb82-653f88f8d791","Type":"ContainerStarted","Data":"5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849"} Dec 12 16:15:33 crc kubenswrapper[4693]: I1212 16:15:33.001621 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.001603177 podStartE2EDuration="3.001603177s" podCreationTimestamp="2025-12-12 16:15:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:15:32.990016944 +0000 UTC m=+1760.158656545" watchObservedRunningTime="2025-12-12 16:15:33.001603177 +0000 UTC 
m=+1760.170242778" Dec 12 16:15:33 crc kubenswrapper[4693]: I1212 16:15:33.982343 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a36bedaf-490a-4ae4-bb82-653f88f8d791","Type":"ContainerStarted","Data":"9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae"} Dec 12 16:15:34 crc kubenswrapper[4693]: I1212 16:15:34.067712 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 12 16:15:34 crc kubenswrapper[4693]: I1212 16:15:34.072690 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 12 16:15:34 crc kubenswrapper[4693]: I1212 16:15:34.087731 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 12 16:15:35 crc kubenswrapper[4693]: I1212 16:15:35.012408 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a36bedaf-490a-4ae4-bb82-653f88f8d791","Type":"ContainerStarted","Data":"0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4"} Dec 12 16:15:35 crc kubenswrapper[4693]: I1212 16:15:35.018925 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 12 16:15:36 crc kubenswrapper[4693]: I1212 16:15:36.028076 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a36bedaf-490a-4ae4-bb82-653f88f8d791","Type":"ContainerStarted","Data":"48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a"} Dec 12 16:15:36 crc kubenswrapper[4693]: I1212 16:15:36.028512 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 12 16:15:36 crc kubenswrapper[4693]: I1212 16:15:36.065506 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.84909226 podStartE2EDuration="6.065478206s" podCreationTimestamp="2025-12-12 16:15:30 +0000 UTC" firstStartedPulling="2025-12-12 16:15:31.455687014 +0000 UTC m=+1758.624326615" lastFinishedPulling="2025-12-12 16:15:35.67207296 +0000 UTC m=+1762.840712561" observedRunningTime="2025-12-12 16:15:36.063137852 +0000 UTC m=+1763.231777463" watchObservedRunningTime="2025-12-12 16:15:36.065478206 +0000 UTC m=+1763.234117807" Dec 12 16:15:36 crc kubenswrapper[4693]: I1212 16:15:36.406561 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:39 crc kubenswrapper[4693]: I1212 16:15:39.142141 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 12 16:15:39 crc kubenswrapper[4693]: I1212 16:15:39.143215 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 12 16:15:39 crc kubenswrapper[4693]: I1212 16:15:39.144748 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 12 16:15:39 crc kubenswrapper[4693]: I1212 16:15:39.150689 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 12 16:15:40 crc kubenswrapper[4693]: I1212 16:15:40.082323 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 12 16:15:40 crc kubenswrapper[4693]: I1212 16:15:40.088295 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 12 16:15:41 crc kubenswrapper[4693]: I1212 
16:15:41.406463 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:41 crc kubenswrapper[4693]: I1212 16:15:41.441683 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.155537 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.360448 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-nrzkw"] Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.362037 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nrzkw" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.365292 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.365463 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.383769 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-nrzkw"] Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.518200 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-config-data\") pod \"nova-cell1-cell-mapping-nrzkw\" (UID: \"21e1cc20-3fe3-4674-97a4-0489caa4588e\") " pod="openstack/nova-cell1-cell-mapping-nrzkw" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.518353 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rct5n\" (UniqueName: \"kubernetes.io/projected/21e1cc20-3fe3-4674-97a4-0489caa4588e-kube-api-access-rct5n\") pod \"nova-cell1-cell-mapping-nrzkw\" (UID: \"21e1cc20-3fe3-4674-97a4-0489caa4588e\") " pod="openstack/nova-cell1-cell-mapping-nrzkw" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.518399 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-scripts\") pod \"nova-cell1-cell-mapping-nrzkw\" (UID: \"21e1cc20-3fe3-4674-97a4-0489caa4588e\") " pod="openstack/nova-cell1-cell-mapping-nrzkw" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.518422 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nrzkw\" (UID: \"21e1cc20-3fe3-4674-97a4-0489caa4588e\") " pod="openstack/nova-cell1-cell-mapping-nrzkw" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.621252 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-config-data\") pod \"nova-cell1-cell-mapping-nrzkw\" (UID: \"21e1cc20-3fe3-4674-97a4-0489caa4588e\") " pod="openstack/nova-cell1-cell-mapping-nrzkw" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.621354 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rct5n\" (UniqueName: 
\"kubernetes.io/projected/21e1cc20-3fe3-4674-97a4-0489caa4588e-kube-api-access-rct5n\") pod \"nova-cell1-cell-mapping-nrzkw\" (UID: \"21e1cc20-3fe3-4674-97a4-0489caa4588e\") " pod="openstack/nova-cell1-cell-mapping-nrzkw" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.621385 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-scripts\") pod \"nova-cell1-cell-mapping-nrzkw\" (UID: \"21e1cc20-3fe3-4674-97a4-0489caa4588e\") " pod="openstack/nova-cell1-cell-mapping-nrzkw" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.621400 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nrzkw\" (UID: \"21e1cc20-3fe3-4674-97a4-0489caa4588e\") " pod="openstack/nova-cell1-cell-mapping-nrzkw" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.680341 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-config-data\") pod \"nova-cell1-cell-mapping-nrzkw\" (UID: \"21e1cc20-3fe3-4674-97a4-0489caa4588e\") " pod="openstack/nova-cell1-cell-mapping-nrzkw" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.684059 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-scripts\") pod \"nova-cell1-cell-mapping-nrzkw\" (UID: \"21e1cc20-3fe3-4674-97a4-0489caa4588e\") " pod="openstack/nova-cell1-cell-mapping-nrzkw" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.704882 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nrzkw\" (UID: \"21e1cc20-3fe3-4674-97a4-0489caa4588e\") " pod="openstack/nova-cell1-cell-mapping-nrzkw" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.723852 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rct5n\" (UniqueName: \"kubernetes.io/projected/21e1cc20-3fe3-4674-97a4-0489caa4588e-kube-api-access-rct5n\") pod \"nova-cell1-cell-mapping-nrzkw\" (UID: \"21e1cc20-3fe3-4674-97a4-0489caa4588e\") " pod="openstack/nova-cell1-cell-mapping-nrzkw" Dec 12 16:15:42 crc kubenswrapper[4693]: I1212 16:15:42.981954 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nrzkw" Dec 12 16:15:43 crc kubenswrapper[4693]: I1212 16:15:43.562814 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-nrzkw"] Dec 12 16:15:44 crc kubenswrapper[4693]: I1212 16:15:44.131401 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nrzkw" event={"ID":"21e1cc20-3fe3-4674-97a4-0489caa4588e","Type":"ContainerStarted","Data":"cfb2ec80fd83004328cdf174da11f06eb57d3839a8b3d4b0563de3d5f4204387"} Dec 12 16:15:44 crc kubenswrapper[4693]: I1212 16:15:44.131716 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nrzkw" event={"ID":"21e1cc20-3fe3-4674-97a4-0489caa4588e","Type":"ContainerStarted","Data":"2641f1eb2c55c3061ff720a97545048986bec7a2dc2b7307968560f28395b38e"} Dec 12 16:15:44 crc kubenswrapper[4693]: I1212 16:15:44.150331 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-nrzkw" podStartSLOduration=2.150305461 podStartE2EDuration="2.150305461s" podCreationTimestamp="2025-12-12 16:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:15:44.148996066 +0000 UTC m=+1771.317635667" watchObservedRunningTime="2025-12-12 16:15:44.150305461 +0000 UTC m=+1771.318945092" Dec 12 16:15:47 crc kubenswrapper[4693]: I1212 16:15:47.357978 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:15:47 crc kubenswrapper[4693]: E1212 16:15:47.359154 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:15:49 crc kubenswrapper[4693]: I1212 16:15:49.668886 4693 generic.go:334] "Generic (PLEG): container finished" podID="21e1cc20-3fe3-4674-97a4-0489caa4588e" containerID="cfb2ec80fd83004328cdf174da11f06eb57d3839a8b3d4b0563de3d5f4204387" exitCode=0 Dec 12 16:15:49 crc kubenswrapper[4693]: I1212 16:15:49.669516 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nrzkw" event={"ID":"21e1cc20-3fe3-4674-97a4-0489caa4588e","Type":"ContainerDied","Data":"cfb2ec80fd83004328cdf174da11f06eb57d3839a8b3d4b0563de3d5f4204387"} Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.159327 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nrzkw" Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.232396 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-config-data\") pod \"21e1cc20-3fe3-4674-97a4-0489caa4588e\" (UID: \"21e1cc20-3fe3-4674-97a4-0489caa4588e\") " Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.232435 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-scripts\") pod \"21e1cc20-3fe3-4674-97a4-0489caa4588e\" (UID: \"21e1cc20-3fe3-4674-97a4-0489caa4588e\") " Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.232493 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rct5n\" (UniqueName: \"kubernetes.io/projected/21e1cc20-3fe3-4674-97a4-0489caa4588e-kube-api-access-rct5n\") pod \"21e1cc20-3fe3-4674-97a4-0489caa4588e\" (UID: \"21e1cc20-3fe3-4674-97a4-0489caa4588e\") " Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.232604 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-combined-ca-bundle\") pod \"21e1cc20-3fe3-4674-97a4-0489caa4588e\" (UID: \"21e1cc20-3fe3-4674-97a4-0489caa4588e\") " Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.239866 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e1cc20-3fe3-4674-97a4-0489caa4588e-kube-api-access-rct5n" (OuterVolumeSpecName: "kube-api-access-rct5n") pod "21e1cc20-3fe3-4674-97a4-0489caa4588e" (UID: "21e1cc20-3fe3-4674-97a4-0489caa4588e"). InnerVolumeSpecName "kube-api-access-rct5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.240565 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-scripts" (OuterVolumeSpecName: "scripts") pod "21e1cc20-3fe3-4674-97a4-0489caa4588e" (UID: "21e1cc20-3fe3-4674-97a4-0489caa4588e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.275011 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-config-data" (OuterVolumeSpecName: "config-data") pod "21e1cc20-3fe3-4674-97a4-0489caa4588e" (UID: "21e1cc20-3fe3-4674-97a4-0489caa4588e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.276710 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21e1cc20-3fe3-4674-97a4-0489caa4588e" (UID: "21e1cc20-3fe3-4674-97a4-0489caa4588e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.336588 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.336644 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.336686 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e1cc20-3fe3-4674-97a4-0489caa4588e-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.336712 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rct5n\" (UniqueName: \"kubernetes.io/projected/21e1cc20-3fe3-4674-97a4-0489caa4588e-kube-api-access-rct5n\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.703767 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nrzkw" event={"ID":"21e1cc20-3fe3-4674-97a4-0489caa4588e","Type":"ContainerDied","Data":"2641f1eb2c55c3061ff720a97545048986bec7a2dc2b7307968560f28395b38e"} Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.703828 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2641f1eb2c55c3061ff720a97545048986bec7a2dc2b7307968560f28395b38e" Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.703880 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nrzkw" Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.917642 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.917934 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="528ba800-12cf-4149-82df-559b4ea15ad7" containerName="nova-scheduler-scheduler" containerID="cri-o://ea445a038ae2162ff1978140c8151fdc0e5fbc131c7609ae8bc1a452b2140276" gracePeriod=30 Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.948185 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.948670 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="88524e6a-0097-46c6-b3e4-17cb2b97cd7e" containerName="nova-api-log" containerID="cri-o://9ac440d5435b2ae2681ccb7388118fa04e4382c2980451b041270404e2a333b8" gracePeriod=30 Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.949526 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="88524e6a-0097-46c6-b3e4-17cb2b97cd7e" containerName="nova-api-api" containerID="cri-o://217f5de13971b73eb0fa42399014544ba1a7e94ec8520792bc485a61c0253a4f" gracePeriod=30 Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.963642 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.963897 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="459816af-8a53-489f-9a69-fe427a9e2ef3" 
containerName="nova-metadata-log" containerID="cri-o://08f8ce06ec9be2d2f8155075eab22a425745403a4ef1938207d0677e7eab0302" gracePeriod=30 Dec 12 16:15:51 crc kubenswrapper[4693]: I1212 16:15:51.964046 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="459816af-8a53-489f-9a69-fe427a9e2ef3" containerName="nova-metadata-metadata" containerID="cri-o://5f7d99afc43c6b9ae94d921b4a43b50f294b39f6b865c8e6e215eddb48eefa09" gracePeriod=30 Dec 12 16:15:52 crc kubenswrapper[4693]: I1212 16:15:52.717311 4693 generic.go:334] "Generic (PLEG): container finished" podID="88524e6a-0097-46c6-b3e4-17cb2b97cd7e" containerID="9ac440d5435b2ae2681ccb7388118fa04e4382c2980451b041270404e2a333b8" exitCode=143 Dec 12 16:15:52 crc kubenswrapper[4693]: I1212 16:15:52.717394 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"88524e6a-0097-46c6-b3e4-17cb2b97cd7e","Type":"ContainerDied","Data":"9ac440d5435b2ae2681ccb7388118fa04e4382c2980451b041270404e2a333b8"} Dec 12 16:15:52 crc kubenswrapper[4693]: I1212 16:15:52.720056 4693 generic.go:334] "Generic (PLEG): container finished" podID="459816af-8a53-489f-9a69-fe427a9e2ef3" containerID="08f8ce06ec9be2d2f8155075eab22a425745403a4ef1938207d0677e7eab0302" exitCode=143 Dec 12 16:15:52 crc kubenswrapper[4693]: I1212 16:15:52.720085 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"459816af-8a53-489f-9a69-fe427a9e2ef3","Type":"ContainerDied","Data":"08f8ce06ec9be2d2f8155075eab22a425745403a4ef1938207d0677e7eab0302"} Dec 12 16:15:54 crc kubenswrapper[4693]: E1212 16:15:54.891764 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ea445a038ae2162ff1978140c8151fdc0e5fbc131c7609ae8bc1a452b2140276" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 12 16:15:54 crc kubenswrapper[4693]: E1212 16:15:54.894579 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ea445a038ae2162ff1978140c8151fdc0e5fbc131c7609ae8bc1a452b2140276" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 12 16:15:54 crc kubenswrapper[4693]: E1212 16:15:54.895876 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ea445a038ae2162ff1978140c8151fdc0e5fbc131c7609ae8bc1a452b2140276" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 12 16:15:54 crc kubenswrapper[4693]: E1212 16:15:54.895951 4693 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="528ba800-12cf-4149-82df-559b4ea15ad7" containerName="nova-scheduler-scheduler" Dec 12 16:15:55 crc kubenswrapper[4693]: I1212 16:15:55.305257 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="459816af-8a53-489f-9a69-fe427a9e2ef3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": read tcp 10.217.0.2:43324->10.217.0.246:8775: read: connection reset by peer" Dec 12 16:15:55 
crc kubenswrapper[4693]: I1212 16:15:55.305296 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="459816af-8a53-489f-9a69-fe427a9e2ef3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": read tcp 10.217.0.2:43320->10.217.0.246:8775: read: connection reset by peer" Dec 12 16:15:55 crc kubenswrapper[4693]: I1212 16:15:55.788178 4693 generic.go:334] "Generic (PLEG): container finished" podID="88524e6a-0097-46c6-b3e4-17cb2b97cd7e" containerID="217f5de13971b73eb0fa42399014544ba1a7e94ec8520792bc485a61c0253a4f" exitCode=0 Dec 12 16:15:55 crc kubenswrapper[4693]: I1212 16:15:55.789331 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"88524e6a-0097-46c6-b3e4-17cb2b97cd7e","Type":"ContainerDied","Data":"217f5de13971b73eb0fa42399014544ba1a7e94ec8520792bc485a61c0253a4f"} Dec 12 16:15:55 crc kubenswrapper[4693]: I1212 16:15:55.792370 4693 generic.go:334] "Generic (PLEG): container finished" podID="459816af-8a53-489f-9a69-fe427a9e2ef3" containerID="5f7d99afc43c6b9ae94d921b4a43b50f294b39f6b865c8e6e215eddb48eefa09" exitCode=0 Dec 12 16:15:55 crc kubenswrapper[4693]: I1212 16:15:55.792420 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"459816af-8a53-489f-9a69-fe427a9e2ef3","Type":"ContainerDied","Data":"5f7d99afc43c6b9ae94d921b4a43b50f294b39f6b865c8e6e215eddb48eefa09"} Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.120425 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.126986 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.265623 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/459816af-8a53-489f-9a69-fe427a9e2ef3-logs\") pod \"459816af-8a53-489f-9a69-fe427a9e2ef3\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.265682 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-internal-tls-certs\") pod \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.265756 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt894\" (UniqueName: \"kubernetes.io/projected/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-kube-api-access-gt894\") pod \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.265779 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-config-data\") pod \"459816af-8a53-489f-9a69-fe427a9e2ef3\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.265885 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-combined-ca-bundle\") pod \"459816af-8a53-489f-9a69-fe427a9e2ef3\" (UID: 
\"459816af-8a53-489f-9a69-fe427a9e2ef3\") " Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.265913 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-config-data\") pod \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.265993 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-public-tls-certs\") pod \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.266169 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px457\" (UniqueName: \"kubernetes.io/projected/459816af-8a53-489f-9a69-fe427a9e2ef3-kube-api-access-px457\") pod \"459816af-8a53-489f-9a69-fe427a9e2ef3\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.266206 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-nova-metadata-tls-certs\") pod \"459816af-8a53-489f-9a69-fe427a9e2ef3\" (UID: \"459816af-8a53-489f-9a69-fe427a9e2ef3\") " Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.266263 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-logs\") pod \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.266298 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-combined-ca-bundle\") pod \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\" (UID: \"88524e6a-0097-46c6-b3e4-17cb2b97cd7e\") " Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.268710 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-logs" (OuterVolumeSpecName: "logs") pod "88524e6a-0097-46c6-b3e4-17cb2b97cd7e" (UID: "88524e6a-0097-46c6-b3e4-17cb2b97cd7e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.269644 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/459816af-8a53-489f-9a69-fe427a9e2ef3-logs" (OuterVolumeSpecName: "logs") pod "459816af-8a53-489f-9a69-fe427a9e2ef3" (UID: "459816af-8a53-489f-9a69-fe427a9e2ef3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.282146 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-kube-api-access-gt894" (OuterVolumeSpecName: "kube-api-access-gt894") pod "88524e6a-0097-46c6-b3e4-17cb2b97cd7e" (UID: "88524e6a-0097-46c6-b3e4-17cb2b97cd7e"). InnerVolumeSpecName "kube-api-access-gt894". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.285325 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/459816af-8a53-489f-9a69-fe427a9e2ef3-kube-api-access-px457" (OuterVolumeSpecName: "kube-api-access-px457") pod "459816af-8a53-489f-9a69-fe427a9e2ef3" (UID: "459816af-8a53-489f-9a69-fe427a9e2ef3"). InnerVolumeSpecName "kube-api-access-px457". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.331195 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "459816af-8a53-489f-9a69-fe427a9e2ef3" (UID: "459816af-8a53-489f-9a69-fe427a9e2ef3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.352958 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88524e6a-0097-46c6-b3e4-17cb2b97cd7e" (UID: "88524e6a-0097-46c6-b3e4-17cb2b97cd7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.359895 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.368754 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.368991 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px457\" (UniqueName: \"kubernetes.io/projected/459816af-8a53-489f-9a69-fe427a9e2ef3-kube-api-access-px457\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.369050 4693 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-logs\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.369111 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.369166 4693 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/459816af-8a53-489f-9a69-fe427a9e2ef3-logs\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.369216 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt894\" (UniqueName: \"kubernetes.io/projected/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-kube-api-access-gt894\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.368923 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-config-data" (OuterVolumeSpecName: "config-data") pod "88524e6a-0097-46c6-b3e4-17cb2b97cd7e" (UID: "88524e6a-0097-46c6-b3e4-17cb2b97cd7e"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.370762 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-config-data" (OuterVolumeSpecName: "config-data") pod "459816af-8a53-489f-9a69-fe427a9e2ef3" (UID: "459816af-8a53-489f-9a69-fe427a9e2ef3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.408952 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "88524e6a-0097-46c6-b3e4-17cb2b97cd7e" (UID: "88524e6a-0097-46c6-b3e4-17cb2b97cd7e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.415550 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "88524e6a-0097-46c6-b3e4-17cb2b97cd7e" (UID: "88524e6a-0097-46c6-b3e4-17cb2b97cd7e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.464830 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "459816af-8a53-489f-9a69-fe427a9e2ef3" (UID: "459816af-8a53-489f-9a69-fe427a9e2ef3"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.470942 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69wd4\" (UniqueName: \"kubernetes.io/projected/528ba800-12cf-4149-82df-559b4ea15ad7-kube-api-access-69wd4\") pod \"528ba800-12cf-4149-82df-559b4ea15ad7\" (UID: \"528ba800-12cf-4149-82df-559b4ea15ad7\") " Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.471153 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/528ba800-12cf-4149-82df-559b4ea15ad7-combined-ca-bundle\") pod \"528ba800-12cf-4149-82df-559b4ea15ad7\" (UID: \"528ba800-12cf-4149-82df-559b4ea15ad7\") " Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.471669 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/528ba800-12cf-4149-82df-559b4ea15ad7-config-data\") pod \"528ba800-12cf-4149-82df-559b4ea15ad7\" (UID: \"528ba800-12cf-4149-82df-559b4ea15ad7\") " Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.477813 4693 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.477859 4693 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.477871 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/459816af-8a53-489f-9a69-fe427a9e2ef3-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.477881 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.477890 4693 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88524e6a-0097-46c6-b3e4-17cb2b97cd7e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.479483 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/528ba800-12cf-4149-82df-559b4ea15ad7-kube-api-access-69wd4" (OuterVolumeSpecName: "kube-api-access-69wd4") pod "528ba800-12cf-4149-82df-559b4ea15ad7" (UID: "528ba800-12cf-4149-82df-559b4ea15ad7"). InnerVolumeSpecName "kube-api-access-69wd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.514886 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/528ba800-12cf-4149-82df-559b4ea15ad7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "528ba800-12cf-4149-82df-559b4ea15ad7" (UID: "528ba800-12cf-4149-82df-559b4ea15ad7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.544194 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/528ba800-12cf-4149-82df-559b4ea15ad7-config-data" (OuterVolumeSpecName: "config-data") pod "528ba800-12cf-4149-82df-559b4ea15ad7" (UID: "528ba800-12cf-4149-82df-559b4ea15ad7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.580943 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69wd4\" (UniqueName: \"kubernetes.io/projected/528ba800-12cf-4149-82df-559b4ea15ad7-kube-api-access-69wd4\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.580992 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/528ba800-12cf-4149-82df-559b4ea15ad7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.581004 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/528ba800-12cf-4149-82df-559b4ea15ad7-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.805059 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"88524e6a-0097-46c6-b3e4-17cb2b97cd7e","Type":"ContainerDied","Data":"5aedf6b16fda5d63cb2bc24df144c0891fe3c275d8f8b6c788947b22ce209b00"} Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.805119 4693 scope.go:117] "RemoveContainer" containerID="217f5de13971b73eb0fa42399014544ba1a7e94ec8520792bc485a61c0253a4f" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.805389 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.808536 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"459816af-8a53-489f-9a69-fe427a9e2ef3","Type":"ContainerDied","Data":"62595d8701f3d2693290725f1154aa8c74c8031944e5c7b00fb9a2c1202a63f6"} Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.808654 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.813496 4693 generic.go:334] "Generic (PLEG): container finished" podID="528ba800-12cf-4149-82df-559b4ea15ad7" containerID="ea445a038ae2162ff1978140c8151fdc0e5fbc131c7609ae8bc1a452b2140276" exitCode=0 Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.813550 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"528ba800-12cf-4149-82df-559b4ea15ad7","Type":"ContainerDied","Data":"ea445a038ae2162ff1978140c8151fdc0e5fbc131c7609ae8bc1a452b2140276"} Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.813557 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.813582 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"528ba800-12cf-4149-82df-559b4ea15ad7","Type":"ContainerDied","Data":"6e6e7f7678b92379c28be0cc67472a09f4af494486894f6f6e7d083af40568b5"} Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.844836 4693 scope.go:117] "RemoveContainer" containerID="9ac440d5435b2ae2681ccb7388118fa04e4382c2980451b041270404e2a333b8" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.861467 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.879998 4693 scope.go:117] "RemoveContainer" containerID="5f7d99afc43c6b9ae94d921b4a43b50f294b39f6b865c8e6e215eddb48eefa09" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.901957 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.915359 4693 scope.go:117] "RemoveContainer" containerID="08f8ce06ec9be2d2f8155075eab22a425745403a4ef1938207d0677e7eab0302" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.933661 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.958644 4693 scope.go:117] "RemoveContainer" containerID="ea445a038ae2162ff1978140c8151fdc0e5fbc131c7609ae8bc1a452b2140276" Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.958894 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 16:15:56 crc kubenswrapper[4693]: I1212 16:15:56.996347 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 12 16:15:57 crc kubenswrapper[4693]: E1212 16:15:57.004762 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e1cc20-3fe3-4674-97a4-0489caa4588e" containerName="nova-manage" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.004804 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e1cc20-3fe3-4674-97a4-0489caa4588e" containerName="nova-manage" Dec 12 16:15:57 crc kubenswrapper[4693]: E1212 16:15:57.004859 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88524e6a-0097-46c6-b3e4-17cb2b97cd7e" containerName="nova-api-api" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.004868 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="88524e6a-0097-46c6-b3e4-17cb2b97cd7e" containerName="nova-api-api" Dec 12 16:15:57 crc kubenswrapper[4693]: E1212 16:15:57.004920 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="459816af-8a53-489f-9a69-fe427a9e2ef3" containerName="nova-metadata-metadata" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.004929 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="459816af-8a53-489f-9a69-fe427a9e2ef3" containerName="nova-metadata-metadata" Dec 12 16:15:57 crc kubenswrapper[4693]: E1212 16:15:57.005039 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88524e6a-0097-46c6-b3e4-17cb2b97cd7e" containerName="nova-api-log" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.005066 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="88524e6a-0097-46c6-b3e4-17cb2b97cd7e" containerName="nova-api-log" Dec 12 16:15:57 crc kubenswrapper[4693]: E1212 16:15:57.005095 4693 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="528ba800-12cf-4149-82df-559b4ea15ad7" containerName="nova-scheduler-scheduler" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.005103 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="528ba800-12cf-4149-82df-559b4ea15ad7" containerName="nova-scheduler-scheduler" Dec 12 16:15:57 crc kubenswrapper[4693]: E1212 16:15:57.005176 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="459816af-8a53-489f-9a69-fe427a9e2ef3" containerName="nova-metadata-log" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.005186 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="459816af-8a53-489f-9a69-fe427a9e2ef3" containerName="nova-metadata-log" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.005963 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e1cc20-3fe3-4674-97a4-0489caa4588e" containerName="nova-manage" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.006018 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="88524e6a-0097-46c6-b3e4-17cb2b97cd7e" containerName="nova-api-api" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.006039 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="88524e6a-0097-46c6-b3e4-17cb2b97cd7e" containerName="nova-api-log" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.007282 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="459816af-8a53-489f-9a69-fe427a9e2ef3" containerName="nova-metadata-metadata" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.007312 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="528ba800-12cf-4149-82df-559b4ea15ad7" containerName="nova-scheduler-scheduler" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.007337 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="459816af-8a53-489f-9a69-fe427a9e2ef3" containerName="nova-metadata-log" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.019408 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.022757 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.025103 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.025579 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.026006 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.042394 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.045309 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.047236 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.052321 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.066693 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.079620 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.091720 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.094553 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.096729 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.096980 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.110792 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.116926 4693 scope.go:117] "RemoveContainer" containerID="ea445a038ae2162ff1978140c8151fdc0e5fbc131c7609ae8bc1a452b2140276" Dec 12 16:15:57 crc kubenswrapper[4693]: E1212 16:15:57.118580 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea445a038ae2162ff1978140c8151fdc0e5fbc131c7609ae8bc1a452b2140276\": container with ID starting with ea445a038ae2162ff1978140c8151fdc0e5fbc131c7609ae8bc1a452b2140276 not found: ID does not exist" containerID="ea445a038ae2162ff1978140c8151fdc0e5fbc131c7609ae8bc1a452b2140276" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.118627 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea445a038ae2162ff1978140c8151fdc0e5fbc131c7609ae8bc1a452b2140276"} err="failed to get container status \"ea445a038ae2162ff1978140c8151fdc0e5fbc131c7609ae8bc1a452b2140276\": rpc error: code = NotFound desc = could not find container \"ea445a038ae2162ff1978140c8151fdc0e5fbc131c7609ae8bc1a452b2140276\": container with ID starting with ea445a038ae2162ff1978140c8151fdc0e5fbc131c7609ae8bc1a452b2140276 not found: ID does not exist" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.208864 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06c297cc-3159-4a2c-a570-3cf346e9d1a6-public-tls-certs\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.208929 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpslq\" (UniqueName: \"kubernetes.io/projected/ed9d54d8-7844-4bd3-bfec-7335b3c68167-kube-api-access-cpslq\") pod \"nova-scheduler-0\" (UID: \"ed9d54d8-7844-4bd3-bfec-7335b3c68167\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 
16:15:57.209096 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f539c84-dbec-48db-9cd8-4c77cad8a9ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f539c84-dbec-48db-9cd8-4c77cad8a9ea\") " pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.209232 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed9d54d8-7844-4bd3-bfec-7335b3c68167-config-data\") pod \"nova-scheduler-0\" (UID: \"ed9d54d8-7844-4bd3-bfec-7335b3c68167\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.209512 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f539c84-dbec-48db-9cd8-4c77cad8a9ea-config-data\") pod \"nova-metadata-0\" (UID: \"8f539c84-dbec-48db-9cd8-4c77cad8a9ea\") " pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.209561 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c297cc-3159-4a2c-a570-3cf346e9d1a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.209585 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f539c84-dbec-48db-9cd8-4c77cad8a9ea-logs\") pod \"nova-metadata-0\" (UID: \"8f539c84-dbec-48db-9cd8-4c77cad8a9ea\") " pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.209620 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfhvx\" (UniqueName: \"kubernetes.io/projected/8f539c84-dbec-48db-9cd8-4c77cad8a9ea-kube-api-access-bfhvx\") pod \"nova-metadata-0\" (UID: \"8f539c84-dbec-48db-9cd8-4c77cad8a9ea\") " pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.209645 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f539c84-dbec-48db-9cd8-4c77cad8a9ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f539c84-dbec-48db-9cd8-4c77cad8a9ea\") " pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.209668 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed9d54d8-7844-4bd3-bfec-7335b3c68167-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ed9d54d8-7844-4bd3-bfec-7335b3c68167\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.209728 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmjfg\" (UniqueName: \"kubernetes.io/projected/06c297cc-3159-4a2c-a570-3cf346e9d1a6-kube-api-access-lmjfg\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.209848 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06c297cc-3159-4a2c-a570-3cf346e9d1a6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.210032 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06c297cc-3159-4a2c-a570-3cf346e9d1a6-logs\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.210058 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06c297cc-3159-4a2c-a570-3cf346e9d1a6-config-data\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.313838 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f539c84-dbec-48db-9cd8-4c77cad8a9ea-config-data\") pod \"nova-metadata-0\" (UID: \"8f539c84-dbec-48db-9cd8-4c77cad8a9ea\") " pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.313929 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c297cc-3159-4a2c-a570-3cf346e9d1a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.313958 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f539c84-dbec-48db-9cd8-4c77cad8a9ea-logs\") pod \"nova-metadata-0\" (UID: \"8f539c84-dbec-48db-9cd8-4c77cad8a9ea\") " pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.313996 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfhvx\" (UniqueName: \"kubernetes.io/projected/8f539c84-dbec-48db-9cd8-4c77cad8a9ea-kube-api-access-bfhvx\") pod \"nova-metadata-0\" (UID: \"8f539c84-dbec-48db-9cd8-4c77cad8a9ea\") " pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.314025 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f539c84-dbec-48db-9cd8-4c77cad8a9ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f539c84-dbec-48db-9cd8-4c77cad8a9ea\") " pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.314054 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed9d54d8-7844-4bd3-bfec-7335b3c68167-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ed9d54d8-7844-4bd3-bfec-7335b3c68167\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.314089 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmjfg\" (UniqueName: \"kubernetes.io/projected/06c297cc-3159-4a2c-a570-3cf346e9d1a6-kube-api-access-lmjfg\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.314189 4693 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06c297cc-3159-4a2c-a570-3cf346e9d1a6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.314403 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06c297cc-3159-4a2c-a570-3cf346e9d1a6-logs\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.314430 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06c297cc-3159-4a2c-a570-3cf346e9d1a6-config-data\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.314472 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06c297cc-3159-4a2c-a570-3cf346e9d1a6-public-tls-certs\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.314500 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpslq\" (UniqueName: \"kubernetes.io/projected/ed9d54d8-7844-4bd3-bfec-7335b3c68167-kube-api-access-cpslq\") pod \"nova-scheduler-0\" (UID: \"ed9d54d8-7844-4bd3-bfec-7335b3c68167\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.314606 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f539c84-dbec-48db-9cd8-4c77cad8a9ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f539c84-dbec-48db-9cd8-4c77cad8a9ea\") " pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.314644 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed9d54d8-7844-4bd3-bfec-7335b3c68167-config-data\") pod \"nova-scheduler-0\" (UID: \"ed9d54d8-7844-4bd3-bfec-7335b3c68167\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.314792 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06c297cc-3159-4a2c-a570-3cf346e9d1a6-logs\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.315320 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f539c84-dbec-48db-9cd8-4c77cad8a9ea-logs\") pod \"nova-metadata-0\" (UID: \"8f539c84-dbec-48db-9cd8-4c77cad8a9ea\") " pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.319870 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06c297cc-3159-4a2c-a570-3cf346e9d1a6-public-tls-certs\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.320459 4693 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed9d54d8-7844-4bd3-bfec-7335b3c68167-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ed9d54d8-7844-4bd3-bfec-7335b3c68167\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.320711 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f539c84-dbec-48db-9cd8-4c77cad8a9ea-config-data\") pod \"nova-metadata-0\" (UID: \"8f539c84-dbec-48db-9cd8-4c77cad8a9ea\") " pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.321467 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f539c84-dbec-48db-9cd8-4c77cad8a9ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f539c84-dbec-48db-9cd8-4c77cad8a9ea\") " pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.321562 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06c297cc-3159-4a2c-a570-3cf346e9d1a6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.321707 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c297cc-3159-4a2c-a570-3cf346e9d1a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.323141 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed9d54d8-7844-4bd3-bfec-7335b3c68167-config-data\") pod \"nova-scheduler-0\" (UID: \"ed9d54d8-7844-4bd3-bfec-7335b3c68167\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.323301 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f539c84-dbec-48db-9cd8-4c77cad8a9ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f539c84-dbec-48db-9cd8-4c77cad8a9ea\") " pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.326253 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06c297cc-3159-4a2c-a570-3cf346e9d1a6-config-data\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.337543 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmjfg\" (UniqueName: \"kubernetes.io/projected/06c297cc-3159-4a2c-a570-3cf346e9d1a6-kube-api-access-lmjfg\") pod \"nova-api-0\" (UID: \"06c297cc-3159-4a2c-a570-3cf346e9d1a6\") " pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.340656 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfhvx\" (UniqueName: \"kubernetes.io/projected/8f539c84-dbec-48db-9cd8-4c77cad8a9ea-kube-api-access-bfhvx\") pod \"nova-metadata-0\" (UID: \"8f539c84-dbec-48db-9cd8-4c77cad8a9ea\") " pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.342263 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-cpslq\" (UniqueName: \"kubernetes.io/projected/ed9d54d8-7844-4bd3-bfec-7335b3c68167-kube-api-access-cpslq\") pod \"nova-scheduler-0\" (UID: \"ed9d54d8-7844-4bd3-bfec-7335b3c68167\") " pod="openstack/nova-scheduler-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.369422 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="459816af-8a53-489f-9a69-fe427a9e2ef3" path="/var/lib/kubelet/pods/459816af-8a53-489f-9a69-fe427a9e2ef3/volumes" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.370154 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="528ba800-12cf-4149-82df-559b4ea15ad7" path="/var/lib/kubelet/pods/528ba800-12cf-4149-82df-559b4ea15ad7/volumes" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.370705 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88524e6a-0097-46c6-b3e4-17cb2b97cd7e" path="/var/lib/kubelet/pods/88524e6a-0097-46c6-b3e4-17cb2b97cd7e/volumes" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.402870 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.411468 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.428987 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.453755 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.519102 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-config-data\") pod \"db313a41-7be9-4215-846e-d480bbdb0186\" (UID: \"db313a41-7be9-4215-846e-d480bbdb0186\") " Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.519718 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-combined-ca-bundle\") pod \"db313a41-7be9-4215-846e-d480bbdb0186\" (UID: \"db313a41-7be9-4215-846e-d480bbdb0186\") " Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.519807 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-scripts\") pod \"db313a41-7be9-4215-846e-d480bbdb0186\" (UID: \"db313a41-7be9-4215-846e-d480bbdb0186\") " Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.519972 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv5g2\" (UniqueName: \"kubernetes.io/projected/db313a41-7be9-4215-846e-d480bbdb0186-kube-api-access-gv5g2\") pod \"db313a41-7be9-4215-846e-d480bbdb0186\" (UID: \"db313a41-7be9-4215-846e-d480bbdb0186\") " Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.527708 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-scripts" (OuterVolumeSpecName: "scripts") pod "db313a41-7be9-4215-846e-d480bbdb0186" (UID: "db313a41-7be9-4215-846e-d480bbdb0186"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.534004 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db313a41-7be9-4215-846e-d480bbdb0186-kube-api-access-gv5g2" (OuterVolumeSpecName: "kube-api-access-gv5g2") pod "db313a41-7be9-4215-846e-d480bbdb0186" (UID: "db313a41-7be9-4215-846e-d480bbdb0186"). InnerVolumeSpecName "kube-api-access-gv5g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.623444 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.623482 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv5g2\" (UniqueName: \"kubernetes.io/projected/db313a41-7be9-4215-846e-d480bbdb0186-kube-api-access-gv5g2\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.717403 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-config-data" (OuterVolumeSpecName: "config-data") pod "db313a41-7be9-4215-846e-d480bbdb0186" (UID: "db313a41-7be9-4215-846e-d480bbdb0186"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.729058 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.806599 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db313a41-7be9-4215-846e-d480bbdb0186" (UID: "db313a41-7be9-4215-846e-d480bbdb0186"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.833477 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db313a41-7be9-4215-846e-d480bbdb0186-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.895544 4693 generic.go:334] "Generic (PLEG): container finished" podID="db313a41-7be9-4215-846e-d480bbdb0186" containerID="6a35e6abd6d171c735a1a0556783693be5c13e6c8e4092e15f7fe988d2094422" exitCode=137 Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.895629 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db313a41-7be9-4215-846e-d480bbdb0186","Type":"ContainerDied","Data":"6a35e6abd6d171c735a1a0556783693be5c13e6c8e4092e15f7fe988d2094422"} Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.895660 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db313a41-7be9-4215-846e-d480bbdb0186","Type":"ContainerDied","Data":"769fe7cf556a09922857e02a0a409bb3d902a5a60d2e27ac6b26a9dfafb044a7"} Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.895675 4693 scope.go:117] "RemoveContainer" containerID="6a35e6abd6d171c735a1a0556783693be5c13e6c8e4092e15f7fe988d2094422" Dec 12 16:15:57 crc kubenswrapper[4693]: I1212 16:15:57.895845 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.010697 4693 scope.go:117] "RemoveContainer" containerID="d7627a800a7e1cfd9479987d744324bb688660b154450196271b45693c115675" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.010900 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.050334 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.086339 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Dec 12 16:15:58 crc kubenswrapper[4693]: E1212 16:15:58.086940 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db313a41-7be9-4215-846e-d480bbdb0186" containerName="aodh-notifier" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.086953 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="db313a41-7be9-4215-846e-d480bbdb0186" containerName="aodh-notifier" Dec 12 16:15:58 crc kubenswrapper[4693]: E1212 16:15:58.086971 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db313a41-7be9-4215-846e-d480bbdb0186" containerName="aodh-api" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.086978 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="db313a41-7be9-4215-846e-d480bbdb0186" containerName="aodh-api" Dec 12 16:15:58 crc kubenswrapper[4693]: E1212 16:15:58.087024 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db313a41-7be9-4215-846e-d480bbdb0186" containerName="aodh-listener" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.087030 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="db313a41-7be9-4215-846e-d480bbdb0186" containerName="aodh-listener" Dec 12 16:15:58 crc kubenswrapper[4693]: E1212 16:15:58.087047 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db313a41-7be9-4215-846e-d480bbdb0186" containerName="aodh-evaluator" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.087054 4693 
state_mem.go:107] "Deleted CPUSet assignment" podUID="db313a41-7be9-4215-846e-d480bbdb0186" containerName="aodh-evaluator" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.087320 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="db313a41-7be9-4215-846e-d480bbdb0186" containerName="aodh-api" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.087330 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="db313a41-7be9-4215-846e-d480bbdb0186" containerName="aodh-evaluator" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.087343 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="db313a41-7be9-4215-846e-d480bbdb0186" containerName="aodh-notifier" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.087351 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="db313a41-7be9-4215-846e-d480bbdb0186" containerName="aodh-listener" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.089653 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.100690 4693 scope.go:117] "RemoveContainer" containerID="05afa3bbd667438b1f5076c4a88c28988d55f0568e9e656a48b9e0dbb1d58641" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.101188 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.101444 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jzr72" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.101549 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.101645 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.101780 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.104138 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.156335 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.203930 4693 scope.go:117] "RemoveContainer" containerID="fbf0bd6a4f399fc4544f70f70af6645033e67a4a5fbf5c9be0fbb9243a343c89" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.246076 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.246553 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297hh\" (UniqueName: \"kubernetes.io/projected/c6774670-3fa6-45d1-9174-72ceb917785b-kube-api-access-297hh\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.246600 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-scripts\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.246670 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-config-data\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.246730 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-internal-tls-certs\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.246765 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-public-tls-certs\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.251925 4693 scope.go:117] "RemoveContainer" containerID="6a35e6abd6d171c735a1a0556783693be5c13e6c8e4092e15f7fe988d2094422" Dec 12 16:15:58 crc kubenswrapper[4693]: E1212 16:15:58.255296 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a35e6abd6d171c735a1a0556783693be5c13e6c8e4092e15f7fe988d2094422\": container with ID starting with 6a35e6abd6d171c735a1a0556783693be5c13e6c8e4092e15f7fe988d2094422 not found: ID does not exist" containerID="6a35e6abd6d171c735a1a0556783693be5c13e6c8e4092e15f7fe988d2094422" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.255341 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a35e6abd6d171c735a1a0556783693be5c13e6c8e4092e15f7fe988d2094422"} err="failed to get container status \"6a35e6abd6d171c735a1a0556783693be5c13e6c8e4092e15f7fe988d2094422\": rpc error: code = NotFound desc = could not find container \"6a35e6abd6d171c735a1a0556783693be5c13e6c8e4092e15f7fe988d2094422\": container with ID starting with 6a35e6abd6d171c735a1a0556783693be5c13e6c8e4092e15f7fe988d2094422 not found: ID does not exist" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.255368 4693 scope.go:117] "RemoveContainer" containerID="d7627a800a7e1cfd9479987d744324bb688660b154450196271b45693c115675" Dec 12 16:15:58 crc kubenswrapper[4693]: E1212 16:15:58.259147 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7627a800a7e1cfd9479987d744324bb688660b154450196271b45693c115675\": container with ID starting with d7627a800a7e1cfd9479987d744324bb688660b154450196271b45693c115675 not found: ID does not exist" containerID="d7627a800a7e1cfd9479987d744324bb688660b154450196271b45693c115675" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.259185 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7627a800a7e1cfd9479987d744324bb688660b154450196271b45693c115675"} err="failed to get container status \"d7627a800a7e1cfd9479987d744324bb688660b154450196271b45693c115675\": rpc error: code = NotFound desc = could not find container 
\"d7627a800a7e1cfd9479987d744324bb688660b154450196271b45693c115675\": container with ID starting with d7627a800a7e1cfd9479987d744324bb688660b154450196271b45693c115675 not found: ID does not exist" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.259210 4693 scope.go:117] "RemoveContainer" containerID="05afa3bbd667438b1f5076c4a88c28988d55f0568e9e656a48b9e0dbb1d58641" Dec 12 16:15:58 crc kubenswrapper[4693]: E1212 16:15:58.259681 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05afa3bbd667438b1f5076c4a88c28988d55f0568e9e656a48b9e0dbb1d58641\": container with ID starting with 05afa3bbd667438b1f5076c4a88c28988d55f0568e9e656a48b9e0dbb1d58641 not found: ID does not exist" containerID="05afa3bbd667438b1f5076c4a88c28988d55f0568e9e656a48b9e0dbb1d58641" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.259710 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05afa3bbd667438b1f5076c4a88c28988d55f0568e9e656a48b9e0dbb1d58641"} err="failed to get container status \"05afa3bbd667438b1f5076c4a88c28988d55f0568e9e656a48b9e0dbb1d58641\": rpc error: code = NotFound desc = could not find container \"05afa3bbd667438b1f5076c4a88c28988d55f0568e9e656a48b9e0dbb1d58641\": container with ID starting with 05afa3bbd667438b1f5076c4a88c28988d55f0568e9e656a48b9e0dbb1d58641 not found: ID does not exist" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.259722 4693 scope.go:117] "RemoveContainer" containerID="fbf0bd6a4f399fc4544f70f70af6645033e67a4a5fbf5c9be0fbb9243a343c89" Dec 12 16:15:58 crc kubenswrapper[4693]: E1212 16:15:58.260077 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbf0bd6a4f399fc4544f70f70af6645033e67a4a5fbf5c9be0fbb9243a343c89\": container with ID starting with fbf0bd6a4f399fc4544f70f70af6645033e67a4a5fbf5c9be0fbb9243a343c89 not found: ID does not exist" containerID="fbf0bd6a4f399fc4544f70f70af6645033e67a4a5fbf5c9be0fbb9243a343c89" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.260099 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbf0bd6a4f399fc4544f70f70af6645033e67a4a5fbf5c9be0fbb9243a343c89"} err="failed to get container status \"fbf0bd6a4f399fc4544f70f70af6645033e67a4a5fbf5c9be0fbb9243a343c89\": rpc error: code = NotFound desc = could not find container \"fbf0bd6a4f399fc4544f70f70af6645033e67a4a5fbf5c9be0fbb9243a343c89\": container with ID starting with fbf0bd6a4f399fc4544f70f70af6645033e67a4a5fbf5c9be0fbb9243a343c89 not found: ID does not exist" Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.263891 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.310468 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 12 16:15:58 crc kubenswrapper[4693]: W1212 16:15:58.311978 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06c297cc_3159_4a2c_a570_3cf346e9d1a6.slice/crio-1be3241237094c8c369242561d6f8da66b4c26be3d0db8d853ae62698b40250d WatchSource:0}: Error finding container 1be3241237094c8c369242561d6f8da66b4c26be3d0db8d853ae62698b40250d: Status 404 returned error can't find the container with id 1be3241237094c8c369242561d6f8da66b4c26be3d0db8d853ae62698b40250d Dec 12 16:15:58 crc kubenswrapper[4693]: 
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.349509 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-297hh\" (UniqueName: \"kubernetes.io/projected/c6774670-3fa6-45d1-9174-72ceb917785b-kube-api-access-297hh\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0"
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.349564 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-scripts\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0"
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.349636 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-config-data\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0"
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.349695 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-internal-tls-certs\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0"
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.349728 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-public-tls-certs\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0"
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.355657 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-config-data\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0"
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.356218 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-public-tls-certs\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0"
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.356268 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-internal-tls-certs\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0"
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.357349 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67"
Dec 12 16:15:58 crc kubenswrapper[4693]: E1212 16:15:58.357653 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d"
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.358898 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0"
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.360588 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-scripts\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0"
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.364505 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297hh\" (UniqueName: \"kubernetes.io/projected/c6774670-3fa6-45d1-9174-72ceb917785b-kube-api-access-297hh\") pod \"aodh-0\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " pod="openstack/aodh-0"
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.483952 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.975596 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ed9d54d8-7844-4bd3-bfec-7335b3c68167","Type":"ContainerStarted","Data":"4fbc74ed93762bfdcf9ac9b9bfba256b7f736ae22d0779af9307a620739a554f"}
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.976690 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ed9d54d8-7844-4bd3-bfec-7335b3c68167","Type":"ContainerStarted","Data":"a539eb94f83192cf2d5c057b3dd19de10b32cacf96982adb0f40a28b339afdd6"}
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.979254 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f539c84-dbec-48db-9cd8-4c77cad8a9ea","Type":"ContainerStarted","Data":"4e86e384d908ce604caf439f23d9a61abe16dab14aec284c04ef8c4974ad6890"}
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.979297 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f539c84-dbec-48db-9cd8-4c77cad8a9ea","Type":"ContainerStarted","Data":"a477f321f1120708e58b8627d36801c10eb26997df0d46506d2ca60513b135ca"}
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.979307 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f539c84-dbec-48db-9cd8-4c77cad8a9ea","Type":"ContainerStarted","Data":"29c02ec9d8fcbca081517a3cf5d24c2f5c6697971afcce8b7cb07fc78a1a88c4"}
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.983582 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"06c297cc-3159-4a2c-a570-3cf346e9d1a6","Type":"ContainerStarted","Data":"4cc0a95ed338e339f78b1403682630e60f810dbe42fd7752db2412f82e2f78f8"}
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.983647 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"06c297cc-3159-4a2c-a570-3cf346e9d1a6","Type":"ContainerStarted","Data":"0e8155c61173ce65869369170e22ea1cb588bcc5962c0695bd3fa1a98952229b"}
Dec 12 16:15:58 crc kubenswrapper[4693]: I1212 16:15:58.983657 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"06c297cc-3159-4a2c-a570-3cf346e9d1a6","Type":"ContainerStarted","Data":"1be3241237094c8c369242561d6f8da66b4c26be3d0db8d853ae62698b40250d"}
Dec 12 16:15:59 crc kubenswrapper[4693]: I1212 16:15:59.011323 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.011255983 podStartE2EDuration="3.011255983s" podCreationTimestamp="2025-12-12 16:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:15:58.995508418 +0000 UTC m=+1786.164148019" watchObservedRunningTime="2025-12-12 16:15:59.011255983 +0000 UTC m=+1786.179895584"
Dec 12 16:15:59 crc kubenswrapper[4693]: I1212 16:15:59.022639 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.022621001 podStartE2EDuration="3.022621001s" podCreationTimestamp="2025-12-12 16:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:15:59.017643326 +0000 UTC m=+1786.186282927" watchObservedRunningTime="2025-12-12 16:15:59.022621001 +0000 UTC m=+1786.191260602"
Dec 12 16:15:59 crc kubenswrapper[4693]: I1212 16:15:59.049442 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.049420515 podStartE2EDuration="3.049420515s" podCreationTimestamp="2025-12-12 16:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:15:59.035878369 +0000 UTC m=+1786.204517970" watchObservedRunningTime="2025-12-12 16:15:59.049420515 +0000 UTC m=+1786.218060106"
Dec 12 16:15:59 crc kubenswrapper[4693]: W1212 16:15:59.091186 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6774670_3fa6_45d1_9174_72ceb917785b.slice/crio-519166189f2dd700587947ff16f42a20ee0fdc85566d795fa259035306220e7e WatchSource:0}: Error finding container 519166189f2dd700587947ff16f42a20ee0fdc85566d795fa259035306220e7e: Status 404 returned error can't find the container with id 519166189f2dd700587947ff16f42a20ee0fdc85566d795fa259035306220e7e
Dec 12 16:15:59 crc kubenswrapper[4693]: I1212 16:15:59.099538 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Dec 12 16:15:59 crc kubenswrapper[4693]: I1212 16:15:59.371701 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db313a41-7be9-4215-846e-d480bbdb0186" path="/var/lib/kubelet/pods/db313a41-7be9-4215-846e-d480bbdb0186/volumes"
Dec 12 16:15:59 crc kubenswrapper[4693]: I1212 16:15:59.998117 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6774670-3fa6-45d1-9174-72ceb917785b","Type":"ContainerStarted","Data":"519166189f2dd700587947ff16f42a20ee0fdc85566d795fa259035306220e7e"}
Dec 12 16:16:00 crc kubenswrapper[4693]: I1212 16:16:00.887389 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Dec 12 16:16:01 crc kubenswrapper[4693]: I1212 16:16:01.030183 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6774670-3fa6-45d1-9174-72ceb917785b","Type":"ContainerStarted","Data":"71275e3fa2b95d3a01099553abf9ab31889d1ba9e763f49446e4cb2603c4e880"}
event={"ID":"c6774670-3fa6-45d1-9174-72ceb917785b","Type":"ContainerStarted","Data":"71275e3fa2b95d3a01099553abf9ab31889d1ba9e763f49446e4cb2603c4e880"} Dec 12 16:16:02 crc kubenswrapper[4693]: I1212 16:16:02.041506 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6774670-3fa6-45d1-9174-72ceb917785b","Type":"ContainerStarted","Data":"d7c05b41d4b991f2f918af8d817c927d7df0ea33b8a9222823725c29af8fe3e1"} Dec 12 16:16:02 crc kubenswrapper[4693]: I1212 16:16:02.042129 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6774670-3fa6-45d1-9174-72ceb917785b","Type":"ContainerStarted","Data":"bb1d3db2879c11b8e55219a6ba9be7661f7b9d5356feebe9fa603e968c6ec7a5"} Dec 12 16:16:02 crc kubenswrapper[4693]: I1212 16:16:02.429414 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 12 16:16:02 crc kubenswrapper[4693]: I1212 16:16:02.455410 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 12 16:16:02 crc kubenswrapper[4693]: I1212 16:16:02.455781 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 12 16:16:03 crc kubenswrapper[4693]: I1212 16:16:03.060252 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6774670-3fa6-45d1-9174-72ceb917785b","Type":"ContainerStarted","Data":"236e1bd78d2389b3c32aa0fd37745fb99ec40786911ba9f2c7a685c2b64d39cb"} Dec 12 16:16:03 crc kubenswrapper[4693]: I1212 16:16:03.092884 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.7107768549999998 podStartE2EDuration="6.092866026s" podCreationTimestamp="2025-12-12 16:15:57 +0000 UTC" firstStartedPulling="2025-12-12 16:15:59.094641528 +0000 UTC m=+1786.263281129" lastFinishedPulling="2025-12-12 16:16:02.476730699 +0000 UTC m=+1789.645370300" observedRunningTime="2025-12-12 16:16:03.084341596 +0000 UTC m=+1790.252981207" watchObservedRunningTime="2025-12-12 16:16:03.092866026 +0000 UTC m=+1790.261505627" Dec 12 16:16:06 crc kubenswrapper[4693]: I1212 16:16:06.079992 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 12 16:16:06 crc kubenswrapper[4693]: I1212 16:16:06.080939 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0d8c08a3-7925-40d9-afbc-76755b6c0263" containerName="kube-state-metrics" containerID="cri-o://b162c33663d6a2f0b2616e8c1086ae14834c01bc6b2ccc7ecf16122053555837" gracePeriod=30 Dec 12 16:16:06 crc kubenswrapper[4693]: I1212 16:16:06.258312 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 12 16:16:06 crc kubenswrapper[4693]: I1212 16:16:06.258858 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="3d90eb59-c661-4bec-ac19-87304c2c6f00" containerName="mysqld-exporter" containerID="cri-o://b78c3167bff48803366947006054424c261bc302dbca930894845a200b7f9e1b" gracePeriod=30 Dec 12 16:16:06 crc kubenswrapper[4693]: I1212 16:16:06.662520 4693 util.go:48] "No ready sandbox for pod can be found. 
Dec 12 16:16:06 crc kubenswrapper[4693]: I1212 16:16:06.691546 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn2lh\" (UniqueName: \"kubernetes.io/projected/0d8c08a3-7925-40d9-afbc-76755b6c0263-kube-api-access-pn2lh\") pod \"0d8c08a3-7925-40d9-afbc-76755b6c0263\" (UID: \"0d8c08a3-7925-40d9-afbc-76755b6c0263\") "
Dec 12 16:16:06 crc kubenswrapper[4693]: I1212 16:16:06.706828 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d8c08a3-7925-40d9-afbc-76755b6c0263-kube-api-access-pn2lh" (OuterVolumeSpecName: "kube-api-access-pn2lh") pod "0d8c08a3-7925-40d9-afbc-76755b6c0263" (UID: "0d8c08a3-7925-40d9-afbc-76755b6c0263"). InnerVolumeSpecName "kube-api-access-pn2lh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:16:06 crc kubenswrapper[4693]: I1212 16:16:06.795771 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn2lh\" (UniqueName: \"kubernetes.io/projected/0d8c08a3-7925-40d9-afbc-76755b6c0263-kube-api-access-pn2lh\") on node \"crc\" DevicePath \"\""
Dec 12 16:16:06 crc kubenswrapper[4693]: I1212 16:16:06.797291 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Dec 12 16:16:06 crc kubenswrapper[4693]: I1212 16:16:06.897295 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8r57\" (UniqueName: \"kubernetes.io/projected/3d90eb59-c661-4bec-ac19-87304c2c6f00-kube-api-access-d8r57\") pod \"3d90eb59-c661-4bec-ac19-87304c2c6f00\" (UID: \"3d90eb59-c661-4bec-ac19-87304c2c6f00\") "
Dec 12 16:16:06 crc kubenswrapper[4693]: I1212 16:16:06.897424 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d90eb59-c661-4bec-ac19-87304c2c6f00-combined-ca-bundle\") pod \"3d90eb59-c661-4bec-ac19-87304c2c6f00\" (UID: \"3d90eb59-c661-4bec-ac19-87304c2c6f00\") "
Dec 12 16:16:06 crc kubenswrapper[4693]: I1212 16:16:06.897528 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d90eb59-c661-4bec-ac19-87304c2c6f00-config-data\") pod \"3d90eb59-c661-4bec-ac19-87304c2c6f00\" (UID: \"3d90eb59-c661-4bec-ac19-87304c2c6f00\") "
Dec 12 16:16:06 crc kubenswrapper[4693]: I1212 16:16:06.903402 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d90eb59-c661-4bec-ac19-87304c2c6f00-kube-api-access-d8r57" (OuterVolumeSpecName: "kube-api-access-d8r57") pod "3d90eb59-c661-4bec-ac19-87304c2c6f00" (UID: "3d90eb59-c661-4bec-ac19-87304c2c6f00"). InnerVolumeSpecName "kube-api-access-d8r57". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:16:06 crc kubenswrapper[4693]: I1212 16:16:06.932459 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d90eb59-c661-4bec-ac19-87304c2c6f00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d90eb59-c661-4bec-ac19-87304c2c6f00" (UID: "3d90eb59-c661-4bec-ac19-87304c2c6f00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:16:06 crc kubenswrapper[4693]: I1212 16:16:06.963656 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d90eb59-c661-4bec-ac19-87304c2c6f00-config-data" (OuterVolumeSpecName: "config-data") pod "3d90eb59-c661-4bec-ac19-87304c2c6f00" (UID: "3d90eb59-c661-4bec-ac19-87304c2c6f00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.000730 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8r57\" (UniqueName: \"kubernetes.io/projected/3d90eb59-c661-4bec-ac19-87304c2c6f00-kube-api-access-d8r57\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.000767 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d90eb59-c661-4bec-ac19-87304c2c6f00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.000779 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d90eb59-c661-4bec-ac19-87304c2c6f00-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.109958 4693 generic.go:334] "Generic (PLEG): container finished" podID="0d8c08a3-7925-40d9-afbc-76755b6c0263" containerID="b162c33663d6a2f0b2616e8c1086ae14834c01bc6b2ccc7ecf16122053555837" exitCode=2 Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.110027 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.110051 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0d8c08a3-7925-40d9-afbc-76755b6c0263","Type":"ContainerDied","Data":"b162c33663d6a2f0b2616e8c1086ae14834c01bc6b2ccc7ecf16122053555837"} Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.110095 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0d8c08a3-7925-40d9-afbc-76755b6c0263","Type":"ContainerDied","Data":"bd1eea82fd61550040eda29a85c53aea6ed1518251559670f30e067dce797ef2"} Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.110114 4693 scope.go:117] "RemoveContainer" containerID="b162c33663d6a2f0b2616e8c1086ae14834c01bc6b2ccc7ecf16122053555837" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.112446 4693 generic.go:334] "Generic (PLEG): container finished" podID="3d90eb59-c661-4bec-ac19-87304c2c6f00" containerID="b78c3167bff48803366947006054424c261bc302dbca930894845a200b7f9e1b" exitCode=2 Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.112494 4693 util.go:48] "No ready sandbox for pod can be found. 
Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.112506 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"3d90eb59-c661-4bec-ac19-87304c2c6f00","Type":"ContainerDied","Data":"b78c3167bff48803366947006054424c261bc302dbca930894845a200b7f9e1b"}
Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.112732 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"3d90eb59-c661-4bec-ac19-87304c2c6f00","Type":"ContainerDied","Data":"ab656abfc2a38b46d29c6ac206e2baa8b35190f9bef323e090599e352e76a4f8"}
Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.173706 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.181666 4693 scope.go:117] "RemoveContainer" containerID="b162c33663d6a2f0b2616e8c1086ae14834c01bc6b2ccc7ecf16122053555837"
Dec 12 16:16:07 crc kubenswrapper[4693]: E1212 16:16:07.185145 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b162c33663d6a2f0b2616e8c1086ae14834c01bc6b2ccc7ecf16122053555837\": container with ID starting with b162c33663d6a2f0b2616e8c1086ae14834c01bc6b2ccc7ecf16122053555837 not found: ID does not exist" containerID="b162c33663d6a2f0b2616e8c1086ae14834c01bc6b2ccc7ecf16122053555837"
Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.185192 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b162c33663d6a2f0b2616e8c1086ae14834c01bc6b2ccc7ecf16122053555837"} err="failed to get container status \"b162c33663d6a2f0b2616e8c1086ae14834c01bc6b2ccc7ecf16122053555837\": rpc error: code = NotFound desc = could not find container \"b162c33663d6a2f0b2616e8c1086ae14834c01bc6b2ccc7ecf16122053555837\": container with ID starting with b162c33663d6a2f0b2616e8c1086ae14834c01bc6b2ccc7ecf16122053555837 not found: ID does not exist"
Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.185219 4693 scope.go:117] "RemoveContainer" containerID="b78c3167bff48803366947006054424c261bc302dbca930894845a200b7f9e1b"
Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.202004 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.222317 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"]
Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.237782 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"]
Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.250116 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 12 16:16:07 crc kubenswrapper[4693]: E1212 16:16:07.251569 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d8c08a3-7925-40d9-afbc-76755b6c0263" containerName="kube-state-metrics"
Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.251593 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d8c08a3-7925-40d9-afbc-76755b6c0263" containerName="kube-state-metrics"
Dec 12 16:16:07 crc kubenswrapper[4693]: E1212 16:16:07.251638 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d90eb59-c661-4bec-ac19-87304c2c6f00" containerName="mysqld-exporter"
Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.251646 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d90eb59-c661-4bec-ac19-87304c2c6f00" containerName="mysqld-exporter"
CPUSet assignment" podUID="3d90eb59-c661-4bec-ac19-87304c2c6f00" containerName="mysqld-exporter" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.251911 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d90eb59-c661-4bec-ac19-87304c2c6f00" containerName="mysqld-exporter" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.251947 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d8c08a3-7925-40d9-afbc-76755b6c0263" containerName="kube-state-metrics" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.252836 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.256556 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.256821 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.283892 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.285818 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.290796 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.291392 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.299353 4693 scope.go:117] "RemoveContainer" containerID="b78c3167bff48803366947006054424c261bc302dbca930894845a200b7f9e1b" Dec 12 16:16:07 crc kubenswrapper[4693]: E1212 16:16:07.299804 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b78c3167bff48803366947006054424c261bc302dbca930894845a200b7f9e1b\": container with ID starting with b78c3167bff48803366947006054424c261bc302dbca930894845a200b7f9e1b not found: ID does not exist" containerID="b78c3167bff48803366947006054424c261bc302dbca930894845a200b7f9e1b" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.299852 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b78c3167bff48803366947006054424c261bc302dbca930894845a200b7f9e1b"} err="failed to get container status \"b78c3167bff48803366947006054424c261bc302dbca930894845a200b7f9e1b\": rpc error: code = NotFound desc = could not find container \"b78c3167bff48803366947006054424c261bc302dbca930894845a200b7f9e1b\": container with ID starting with b78c3167bff48803366947006054424c261bc302dbca930894845a200b7f9e1b not found: ID does not exist" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.306772 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmxks\" (UniqueName: \"kubernetes.io/projected/cd076b50-0211-4876-b7df-b7140ebac121-kube-api-access-hmxks\") pod \"kube-state-metrics-0\" (UID: \"cd076b50-0211-4876-b7df-b7140ebac121\") " pod="openstack/kube-state-metrics-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.306955 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd076b50-0211-4876-b7df-b7140ebac121-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cd076b50-0211-4876-b7df-b7140ebac121\") " pod="openstack/kube-state-metrics-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.306980 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cd076b50-0211-4876-b7df-b7140ebac121-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cd076b50-0211-4876-b7df-b7140ebac121\") " pod="openstack/kube-state-metrics-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.307008 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd076b50-0211-4876-b7df-b7140ebac121-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cd076b50-0211-4876-b7df-b7140ebac121\") " pod="openstack/kube-state-metrics-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.310168 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.321819 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.410060 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd076b50-0211-4876-b7df-b7140ebac121-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cd076b50-0211-4876-b7df-b7140ebac121\") " pod="openstack/kube-state-metrics-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.410120 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cd076b50-0211-4876-b7df-b7140ebac121-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cd076b50-0211-4876-b7df-b7140ebac121\") " pod="openstack/kube-state-metrics-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.410155 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/19aa1452-554a-42bc-932c-01383d930ccf-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"19aa1452-554a-42bc-932c-01383d930ccf\") " pod="openstack/mysqld-exporter-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.410175 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd076b50-0211-4876-b7df-b7140ebac121-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cd076b50-0211-4876-b7df-b7140ebac121\") " pod="openstack/kube-state-metrics-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.410248 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19aa1452-554a-42bc-932c-01383d930ccf-config-data\") pod \"mysqld-exporter-0\" (UID: \"19aa1452-554a-42bc-932c-01383d930ccf\") " pod="openstack/mysqld-exporter-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.410320 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmxks\" (UniqueName: 
\"kubernetes.io/projected/cd076b50-0211-4876-b7df-b7140ebac121-kube-api-access-hmxks\") pod \"kube-state-metrics-0\" (UID: \"cd076b50-0211-4876-b7df-b7140ebac121\") " pod="openstack/kube-state-metrics-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.410339 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r7nl\" (UniqueName: \"kubernetes.io/projected/19aa1452-554a-42bc-932c-01383d930ccf-kube-api-access-6r7nl\") pod \"mysqld-exporter-0\" (UID: \"19aa1452-554a-42bc-932c-01383d930ccf\") " pod="openstack/mysqld-exporter-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.410573 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19aa1452-554a-42bc-932c-01383d930ccf-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"19aa1452-554a-42bc-932c-01383d930ccf\") " pod="openstack/mysqld-exporter-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.418481 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cd076b50-0211-4876-b7df-b7140ebac121-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cd076b50-0211-4876-b7df-b7140ebac121\") " pod="openstack/kube-state-metrics-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.430113 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d8c08a3-7925-40d9-afbc-76755b6c0263" path="/var/lib/kubelet/pods/0d8c08a3-7925-40d9-afbc-76755b6c0263/volumes" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.430903 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d90eb59-c661-4bec-ac19-87304c2c6f00" path="/var/lib/kubelet/pods/3d90eb59-c661-4bec-ac19-87304c2c6f00/volumes" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.431594 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.431620 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.431630 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.450031 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd076b50-0211-4876-b7df-b7140ebac121-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cd076b50-0211-4876-b7df-b7140ebac121\") " pod="openstack/kube-state-metrics-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.454138 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmxks\" (UniqueName: \"kubernetes.io/projected/cd076b50-0211-4876-b7df-b7140ebac121-kube-api-access-hmxks\") pod \"kube-state-metrics-0\" (UID: \"cd076b50-0211-4876-b7df-b7140ebac121\") " pod="openstack/kube-state-metrics-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.479982 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.483148 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd076b50-0211-4876-b7df-b7140ebac121-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cd076b50-0211-4876-b7df-b7140ebac121\") " pod="openstack/kube-state-metrics-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.484559 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.484580 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.513944 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19aa1452-554a-42bc-932c-01383d930ccf-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"19aa1452-554a-42bc-932c-01383d930ccf\") " pod="openstack/mysqld-exporter-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.514067 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/19aa1452-554a-42bc-932c-01383d930ccf-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"19aa1452-554a-42bc-932c-01383d930ccf\") " pod="openstack/mysqld-exporter-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.514134 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19aa1452-554a-42bc-932c-01383d930ccf-config-data\") pod \"mysqld-exporter-0\" (UID: \"19aa1452-554a-42bc-932c-01383d930ccf\") " pod="openstack/mysqld-exporter-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.514205 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r7nl\" (UniqueName: \"kubernetes.io/projected/19aa1452-554a-42bc-932c-01383d930ccf-kube-api-access-6r7nl\") pod \"mysqld-exporter-0\" (UID: \"19aa1452-554a-42bc-932c-01383d930ccf\") " pod="openstack/mysqld-exporter-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.538851 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/19aa1452-554a-42bc-932c-01383d930ccf-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"19aa1452-554a-42bc-932c-01383d930ccf\") " pod="openstack/mysqld-exporter-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.541011 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r7nl\" (UniqueName: \"kubernetes.io/projected/19aa1452-554a-42bc-932c-01383d930ccf-kube-api-access-6r7nl\") pod \"mysqld-exporter-0\" (UID: \"19aa1452-554a-42bc-932c-01383d930ccf\") " pod="openstack/mysqld-exporter-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.555040 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19aa1452-554a-42bc-932c-01383d930ccf-config-data\") pod \"mysqld-exporter-0\" (UID: \"19aa1452-554a-42bc-932c-01383d930ccf\") " pod="openstack/mysqld-exporter-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.561044 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19aa1452-554a-42bc-932c-01383d930ccf-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"19aa1452-554a-42bc-932c-01383d930ccf\") " pod="openstack/mysqld-exporter-0" Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 
Dec 12 16:16:07 crc kubenswrapper[4693]: I1212 16:16:07.613692 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Dec 12 16:16:08 crc kubenswrapper[4693]: I1212 16:16:08.182543 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 12 16:16:08 crc kubenswrapper[4693]: I1212 16:16:08.206491 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Dec 12 16:16:08 crc kubenswrapper[4693]: I1212 16:16:08.241458 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 12 16:16:08 crc kubenswrapper[4693]: I1212 16:16:08.490551 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="06c297cc-3159-4a2c-a570-3cf346e9d1a6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.254:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 12 16:16:08 crc kubenswrapper[4693]: I1212 16:16:08.490577 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="06c297cc-3159-4a2c-a570-3cf346e9d1a6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.254:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 12 16:16:08 crc kubenswrapper[4693]: I1212 16:16:08.538520 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8f539c84-dbec-48db-9cd8-4c77cad8a9ea" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.0:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 12 16:16:08 crc kubenswrapper[4693]: I1212 16:16:08.538520 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8f539c84-dbec-48db-9cd8-4c77cad8a9ea" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.0:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 12 16:16:09 crc kubenswrapper[4693]: I1212 16:16:09.052706 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 12 16:16:09 crc kubenswrapper[4693]: I1212 16:16:09.053280 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerName="ceilometer-central-agent" containerID="cri-o://5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849" gracePeriod=30
Dec 12 16:16:09 crc kubenswrapper[4693]: I1212 16:16:09.053312 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerName="ceilometer-notification-agent" containerID="cri-o://9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae" gracePeriod=30
Dec 12 16:16:09 crc kubenswrapper[4693]: I1212 16:16:09.053320 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerName="proxy-httpd" containerID="cri-o://48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a" gracePeriod=30
Dec 12 16:16:09 crc kubenswrapper[4693]: I1212 16:16:09.053294 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerName="sg-core" containerID="cri-o://0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4" gracePeriod=30
Dec 12 16:16:09 crc kubenswrapper[4693]: I1212 16:16:09.138725 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"19aa1452-554a-42bc-932c-01383d930ccf","Type":"ContainerStarted","Data":"f7308bb52ceae6a8fb81bf93d9caec33d6eeca9cdf8781d7907bb2cdacd0279c"}
Dec 12 16:16:09 crc kubenswrapper[4693]: I1212 16:16:09.138994 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"19aa1452-554a-42bc-932c-01383d930ccf","Type":"ContainerStarted","Data":"b3c9f849707dfad88121cffd0a434411ac8095269124e1f0e10d40dfaad898b2"}
Dec 12 16:16:09 crc kubenswrapper[4693]: I1212 16:16:09.140802 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cd076b50-0211-4876-b7df-b7140ebac121","Type":"ContainerStarted","Data":"c048ca5dfdbb7ab3e02258928c08d02fced1533e48af292164ae3ca630bc8112"}
Dec 12 16:16:09 crc kubenswrapper[4693]: I1212 16:16:09.140872 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cd076b50-0211-4876-b7df-b7140ebac121","Type":"ContainerStarted","Data":"90f9c1e0b2cbb763e18ac5abce08f1bcd6ce6c2b0e7a4ca6f9357e7435e07f77"}
Dec 12 16:16:09 crc kubenswrapper[4693]: I1212 16:16:09.163622 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.6258809570000001 podStartE2EDuration="2.163601863s" podCreationTimestamp="2025-12-12 16:16:07 +0000 UTC" firstStartedPulling="2025-12-12 16:16:08.192390618 +0000 UTC m=+1795.361030219" lastFinishedPulling="2025-12-12 16:16:08.730111524 +0000 UTC m=+1795.898751125" observedRunningTime="2025-12-12 16:16:09.154519837 +0000 UTC m=+1796.323159438" watchObservedRunningTime="2025-12-12 16:16:09.163601863 +0000 UTC m=+1796.332241464"
Dec 12 16:16:09 crc kubenswrapper[4693]: I1212 16:16:09.208375 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.838900534 podStartE2EDuration="2.208345862s" podCreationTimestamp="2025-12-12 16:16:07 +0000 UTC" firstStartedPulling="2025-12-12 16:16:08.199943242 +0000 UTC m=+1795.368582843" lastFinishedPulling="2025-12-12 16:16:08.56938856 +0000 UTC m=+1795.738028171" observedRunningTime="2025-12-12 16:16:09.186248815 +0000 UTC m=+1796.354888416" watchObservedRunningTime="2025-12-12 16:16:09.208345862 +0000 UTC m=+1796.376985473"
Dec 12 16:16:09 crc kubenswrapper[4693]: E1212 16:16:09.549025 4693 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda36bedaf_490a_4ae4_bb82_653f88f8d791.slice/crio-conmon-48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a.scope\": RecentStats: unable to find data in memory cache]"
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.025090 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.096315 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-combined-ca-bundle\") pod \"a36bedaf-490a-4ae4-bb82-653f88f8d791\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") "
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.096369 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtmht\" (UniqueName: \"kubernetes.io/projected/a36bedaf-490a-4ae4-bb82-653f88f8d791-kube-api-access-gtmht\") pod \"a36bedaf-490a-4ae4-bb82-653f88f8d791\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") "
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.096455 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-scripts\") pod \"a36bedaf-490a-4ae4-bb82-653f88f8d791\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") "
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.096501 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a36bedaf-490a-4ae4-bb82-653f88f8d791-log-httpd\") pod \"a36bedaf-490a-4ae4-bb82-653f88f8d791\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") "
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.096537 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a36bedaf-490a-4ae4-bb82-653f88f8d791-run-httpd\") pod \"a36bedaf-490a-4ae4-bb82-653f88f8d791\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") "
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.096567 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-sg-core-conf-yaml\") pod \"a36bedaf-490a-4ae4-bb82-653f88f8d791\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") "
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.096596 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-config-data\") pod \"a36bedaf-490a-4ae4-bb82-653f88f8d791\" (UID: \"a36bedaf-490a-4ae4-bb82-653f88f8d791\") "
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.097293 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a36bedaf-490a-4ae4-bb82-653f88f8d791-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a36bedaf-490a-4ae4-bb82-653f88f8d791" (UID: "a36bedaf-490a-4ae4-bb82-653f88f8d791"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.097353 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a36bedaf-490a-4ae4-bb82-653f88f8d791-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a36bedaf-490a-4ae4-bb82-653f88f8d791" (UID: "a36bedaf-490a-4ae4-bb82-653f88f8d791"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.110575 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a36bedaf-490a-4ae4-bb82-653f88f8d791-kube-api-access-gtmht" (OuterVolumeSpecName: "kube-api-access-gtmht") pod "a36bedaf-490a-4ae4-bb82-653f88f8d791" (UID: "a36bedaf-490a-4ae4-bb82-653f88f8d791"). InnerVolumeSpecName "kube-api-access-gtmht". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.114572 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-scripts" (OuterVolumeSpecName: "scripts") pod "a36bedaf-490a-4ae4-bb82-653f88f8d791" (UID: "a36bedaf-490a-4ae4-bb82-653f88f8d791"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.166711 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a36bedaf-490a-4ae4-bb82-653f88f8d791" (UID: "a36bedaf-490a-4ae4-bb82-653f88f8d791"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.169510 4693 generic.go:334] "Generic (PLEG): container finished" podID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerID="48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a" exitCode=0 Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.169548 4693 generic.go:334] "Generic (PLEG): container finished" podID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerID="0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4" exitCode=2 Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.169556 4693 generic.go:334] "Generic (PLEG): container finished" podID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerID="9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae" exitCode=0 Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.169563 4693 generic.go:334] "Generic (PLEG): container finished" podID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerID="5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849" exitCode=0 Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.170858 4693 util.go:48] "No ready sandbox for pod can be found. 
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.171458 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a36bedaf-490a-4ae4-bb82-653f88f8d791","Type":"ContainerDied","Data":"48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a"}
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.171494 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.171505 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a36bedaf-490a-4ae4-bb82-653f88f8d791","Type":"ContainerDied","Data":"0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4"}
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.171514 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a36bedaf-490a-4ae4-bb82-653f88f8d791","Type":"ContainerDied","Data":"9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae"}
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.171524 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a36bedaf-490a-4ae4-bb82-653f88f8d791","Type":"ContainerDied","Data":"5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849"}
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.171532 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a36bedaf-490a-4ae4-bb82-653f88f8d791","Type":"ContainerDied","Data":"9eba8295e180ce2ccf33fc9049b2e76fc93903895f58ff04f0e4ffd722db0340"}
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.171592 4693 scope.go:117] "RemoveContainer" containerID="48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a"
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.204052 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtmht\" (UniqueName: \"kubernetes.io/projected/a36bedaf-490a-4ae4-bb82-653f88f8d791-kube-api-access-gtmht\") on node \"crc\" DevicePath \"\""
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.204082 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-scripts\") on node \"crc\" DevicePath \"\""
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.204091 4693 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a36bedaf-490a-4ae4-bb82-653f88f8d791-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.204102 4693 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a36bedaf-490a-4ae4-bb82-653f88f8d791-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.204111 4693 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.205607 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a36bedaf-490a-4ae4-bb82-653f88f8d791" (UID: "a36bedaf-490a-4ae4-bb82-653f88f8d791"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
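The RemoveContainer sequence that follows shows cleanup racing with the runtime: by the time kubelet re-queries ContainerStatus, CRI-O has already dropped the container, so every lookup returns gRPC NotFound and pod_container_deletor logs "DeleteContainer returned error" even though the container being gone is the desired end state. A common way to write such cleanup is to treat NotFound as success; a minimal sketch using the real google.golang.org/grpc/status API (removeContainer and deleteIfPresent are illustrative stand-ins, not kubelet code):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI ContainerStatus/RemoveContainer
// round-trip; here it always fails the way the runtime does below.
func removeContainer(id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

// deleteIfPresent treats gRPC NotFound as success, which makes the
// cleanup idempotent under the kind of retries logged below.
func deleteIfPresent(id string) error {
	err := removeContainer(id)
	if status.Code(err) == codes.NotFound {
		return nil // already gone: the desired end state
	}
	return err
}

func main() {
	fmt.Println(deleteIfPresent("48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a")) // <nil>
}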
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.218641 4693 scope.go:117] "RemoveContainer" containerID="0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.284222 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-config-data" (OuterVolumeSpecName: "config-data") pod "a36bedaf-490a-4ae4-bb82-653f88f8d791" (UID: "a36bedaf-490a-4ae4-bb82-653f88f8d791"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.308663 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.308710 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36bedaf-490a-4ae4-bb82-653f88f8d791-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.385527 4693 scope.go:117] "RemoveContainer" containerID="9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.417304 4693 scope.go:117] "RemoveContainer" containerID="5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.440597 4693 scope.go:117] "RemoveContainer" containerID="48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a" Dec 12 16:16:10 crc kubenswrapper[4693]: E1212 16:16:10.441183 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a\": container with ID starting with 48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a not found: ID does not exist" containerID="48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.441249 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a"} err="failed to get container status \"48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a\": rpc error: code = NotFound desc = could not find container \"48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a\": container with ID starting with 48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a not found: ID does not exist" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.441310 4693 scope.go:117] "RemoveContainer" containerID="0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4" Dec 12 16:16:10 crc kubenswrapper[4693]: E1212 16:16:10.441709 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4\": container with ID starting with 0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4 not found: ID does not exist" containerID="0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.441735 4693 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4"} err="failed to get container status \"0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4\": rpc error: code = NotFound desc = could not find container \"0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4\": container with ID starting with 0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4 not found: ID does not exist" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.441754 4693 scope.go:117] "RemoveContainer" containerID="9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae" Dec 12 16:16:10 crc kubenswrapper[4693]: E1212 16:16:10.441962 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae\": container with ID starting with 9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae not found: ID does not exist" containerID="9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.442011 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae"} err="failed to get container status \"9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae\": rpc error: code = NotFound desc = could not find container \"9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae\": container with ID starting with 9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae not found: ID does not exist" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.442029 4693 scope.go:117] "RemoveContainer" containerID="5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849" Dec 12 16:16:10 crc kubenswrapper[4693]: E1212 16:16:10.442391 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849\": container with ID starting with 5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849 not found: ID does not exist" containerID="5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.442412 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849"} err="failed to get container status \"5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849\": rpc error: code = NotFound desc = could not find container \"5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849\": container with ID starting with 5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849 not found: ID does not exist" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.442427 4693 scope.go:117] "RemoveContainer" containerID="48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.442665 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a"} err="failed to get container status \"48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a\": rpc error: code = NotFound desc = could 
not find container \"48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a\": container with ID starting with 48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a not found: ID does not exist" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.442690 4693 scope.go:117] "RemoveContainer" containerID="0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.442893 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4"} err="failed to get container status \"0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4\": rpc error: code = NotFound desc = could not find container \"0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4\": container with ID starting with 0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4 not found: ID does not exist" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.442915 4693 scope.go:117] "RemoveContainer" containerID="9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.443456 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae"} err="failed to get container status \"9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae\": rpc error: code = NotFound desc = could not find container \"9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae\": container with ID starting with 9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae not found: ID does not exist" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.443483 4693 scope.go:117] "RemoveContainer" containerID="5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.443728 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849"} err="failed to get container status \"5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849\": rpc error: code = NotFound desc = could not find container \"5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849\": container with ID starting with 5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849 not found: ID does not exist" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.443747 4693 scope.go:117] "RemoveContainer" containerID="48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.444010 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a"} err="failed to get container status \"48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a\": rpc error: code = NotFound desc = could not find container \"48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a\": container with ID starting with 48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a not found: ID does not exist" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.444035 4693 scope.go:117] "RemoveContainer" containerID="0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.444290 4693 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4"} err="failed to get container status \"0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4\": rpc error: code = NotFound desc = could not find container \"0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4\": container with ID starting with 0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4 not found: ID does not exist" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.444309 4693 scope.go:117] "RemoveContainer" containerID="9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.444600 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae"} err="failed to get container status \"9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae\": rpc error: code = NotFound desc = could not find container \"9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae\": container with ID starting with 9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae not found: ID does not exist" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.444633 4693 scope.go:117] "RemoveContainer" containerID="5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.444845 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849"} err="failed to get container status \"5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849\": rpc error: code = NotFound desc = could not find container \"5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849\": container with ID starting with 5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849 not found: ID does not exist" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.444865 4693 scope.go:117] "RemoveContainer" containerID="48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.445135 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a"} err="failed to get container status \"48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a\": rpc error: code = NotFound desc = could not find container \"48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a\": container with ID starting with 48c8a932e362e4960aee5a24527124ab1cdc4146bd6f6afd43c084106dba997a not found: ID does not exist" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.445157 4693 scope.go:117] "RemoveContainer" containerID="0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.445389 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4"} err="failed to get container status \"0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4\": rpc error: code = NotFound desc = could not find container \"0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4\": container with ID starting with 
0df95a0bd631b2891c70f5e3f510c580e842c3ad56d679309026ffe3168e76d4 not found: ID does not exist" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.445412 4693 scope.go:117] "RemoveContainer" containerID="9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.445616 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae"} err="failed to get container status \"9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae\": rpc error: code = NotFound desc = could not find container \"9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae\": container with ID starting with 9648aa7ec89730a61890e0d2b3e667eacbbd399baba4443e02087ba36f5815ae not found: ID does not exist" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.445637 4693 scope.go:117] "RemoveContainer" containerID="5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.445886 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849"} err="failed to get container status \"5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849\": rpc error: code = NotFound desc = could not find container \"5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849\": container with ID starting with 5d29bc1a982a1d8e1d80ae03e8ea53f4f6c2cdf482b9f57d3569747591753849 not found: ID does not exist" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.509957 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.538486 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.580515 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:16:10 crc kubenswrapper[4693]: E1212 16:16:10.581045 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerName="ceilometer-central-agent" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.581065 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerName="ceilometer-central-agent" Dec 12 16:16:10 crc kubenswrapper[4693]: E1212 16:16:10.581091 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerName="proxy-httpd" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.581099 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerName="proxy-httpd" Dec 12 16:16:10 crc kubenswrapper[4693]: E1212 16:16:10.581110 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerName="ceilometer-notification-agent" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.581117 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerName="ceilometer-notification-agent" Dec 12 16:16:10 crc kubenswrapper[4693]: E1212 16:16:10.581132 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerName="sg-core" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 
16:16:10.581137 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerName="sg-core" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.581378 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerName="ceilometer-central-agent" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.581404 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerName="sg-core" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.581416 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerName="proxy-httpd" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.581438 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" containerName="ceilometer-notification-agent" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.583532 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.587787 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.588043 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.588196 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.600495 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.623960 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3262795d-61c5-42aa-aed3-887cb1864d7f-log-httpd\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.624058 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3262795d-61c5-42aa-aed3-887cb1864d7f-run-httpd\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.624086 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-scripts\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.624119 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.624146 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-config-data\") pod \"ceilometer-0\" (UID: 
\"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.624169 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcxxp\" (UniqueName: \"kubernetes.io/projected/3262795d-61c5-42aa-aed3-887cb1864d7f-kube-api-access-rcxxp\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.624237 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.624258 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.727007 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3262795d-61c5-42aa-aed3-887cb1864d7f-run-httpd\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.727051 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-scripts\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.727086 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.727121 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-config-data\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.727155 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcxxp\" (UniqueName: \"kubernetes.io/projected/3262795d-61c5-42aa-aed3-887cb1864d7f-kube-api-access-rcxxp\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.727238 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.727261 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.727332 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3262795d-61c5-42aa-aed3-887cb1864d7f-log-httpd\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.727450 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3262795d-61c5-42aa-aed3-887cb1864d7f-run-httpd\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.727643 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3262795d-61c5-42aa-aed3-887cb1864d7f-log-httpd\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.732972 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.733995 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.734638 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-config-data\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.734647 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-scripts\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.744505 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.749971 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcxxp\" (UniqueName: \"kubernetes.io/projected/3262795d-61c5-42aa-aed3-887cb1864d7f-kube-api-access-rcxxp\") pod \"ceilometer-0\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") " pod="openstack/ceilometer-0" Dec 12 16:16:10 crc kubenswrapper[4693]: I1212 16:16:10.905376 4693 util.go:30] "No sandbox for pod can be found. 
Dec 12 16:16:11 crc kubenswrapper[4693]: I1212 16:16:11.372915 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a36bedaf-490a-4ae4-bb82-653f88f8d791" path="/var/lib/kubelet/pods/a36bedaf-490a-4ae4-bb82-653f88f8d791/volumes"
Dec 12 16:16:11 crc kubenswrapper[4693]: I1212 16:16:11.410267 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 12 16:16:12 crc kubenswrapper[4693]: I1212 16:16:12.198999 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3262795d-61c5-42aa-aed3-887cb1864d7f","Type":"ContainerStarted","Data":"9b01d28e16b5f450f1d91f2a50ff5f6141636d4ba749f4d4907d65374bd84491"}
Dec 12 16:16:12 crc kubenswrapper[4693]: I1212 16:16:12.199599 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3262795d-61c5-42aa-aed3-887cb1864d7f","Type":"ContainerStarted","Data":"1885f3ce3922d21562c14a31ffff9057bdbd0d027f02d517fa4982866569b97e"}
Dec 12 16:16:12 crc kubenswrapper[4693]: I1212 16:16:12.357417 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67"
Dec 12 16:16:12 crc kubenswrapper[4693]: E1212 16:16:12.370400 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d"
Dec 12 16:16:13 crc kubenswrapper[4693]: I1212 16:16:13.211542 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3262795d-61c5-42aa-aed3-887cb1864d7f","Type":"ContainerStarted","Data":"90c67448afc6ddacbc18ae66f889c063358de7b1edbb779b434ffbef21141d67"}
Dec 12 16:16:14 crc kubenswrapper[4693]: I1212 16:16:14.227837 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3262795d-61c5-42aa-aed3-887cb1864d7f","Type":"ContainerStarted","Data":"04197b6fd184dbbab4c9011b5e90eb73bd85f4d56687f75c405e31b05e5021bb"}
Dec 12 16:16:15 crc kubenswrapper[4693]: I1212 16:16:15.247307 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3262795d-61c5-42aa-aed3-887cb1864d7f","Type":"ContainerStarted","Data":"b427c51ade48afcb1d5163c49cde4246bdce9328b58a789be13f65f8fda6ecd7"}
Dec 12 16:16:15 crc kubenswrapper[4693]: I1212 16:16:15.248091 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 12 16:16:15 crc kubenswrapper[4693]: I1212 16:16:15.312687 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8669511170000002 podStartE2EDuration="5.312659638s" podCreationTimestamp="2025-12-12 16:16:10 +0000 UTC" firstStartedPulling="2025-12-12 16:16:11.408807791 +0000 UTC m=+1798.577447392" lastFinishedPulling="2025-12-12 16:16:14.854516312 +0000 UTC m=+1802.023155913" observedRunningTime="2025-12-12 16:16:15.266952362 +0000 UTC m=+1802.435591963" watchObservedRunningTime="2025-12-12 16:16:15.312659638 +0000 UTC m=+1802.481299239"
Dec 12 16:16:17 crc kubenswrapper[4693]: I1212 16:16:17.422898 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
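The pod_startup_latency_tracker entry above carries its own arithmetic: the logged values are consistent with podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp, and podStartSLOduration = that end-to-end time minus the image-pull window (lastFinishedPulling - firstStartedPulling), i.e. 5.312659638s - 3.445708521s = 1.866951117s. A check in Go with the timestamps copied from the entry; the formula is inferred from the numbers, not quoted from the tracker's source:

package main

import (
	"fmt"
	"time"
)

// mustParse reads the timestamp format used in the entry above; the
// ".999999999" layout makes the fractional seconds optional.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-12-12 16:16:10 +0000 UTC")
	firstPull := mustParse("2025-12-12 16:16:11.408807791 +0000 UTC")
	lastPull := mustParse("2025-12-12 16:16:14.854516312 +0000 UTC")
	watched := mustParse("2025-12-12 16:16:15.312659638 +0000 UTC")

	e2e := watched.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration
	fmt.Println(e2e, slo)                // 5.312659638s 1.866951117s
}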
pod="openstack/nova-api-0" Dec 12 16:16:17 crc kubenswrapper[4693]: I1212 16:16:17.424564 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 12 16:16:17 crc kubenswrapper[4693]: I1212 16:16:17.429986 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 12 16:16:17 crc kubenswrapper[4693]: I1212 16:16:17.433549 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 12 16:16:17 crc kubenswrapper[4693]: I1212 16:16:17.468595 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 12 16:16:17 crc kubenswrapper[4693]: I1212 16:16:17.471915 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 12 16:16:17 crc kubenswrapper[4693]: I1212 16:16:17.480133 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 12 16:16:17 crc kubenswrapper[4693]: I1212 16:16:17.626202 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 12 16:16:18 crc kubenswrapper[4693]: I1212 16:16:18.285218 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 12 16:16:18 crc kubenswrapper[4693]: I1212 16:16:18.298862 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 12 16:16:18 crc kubenswrapper[4693]: I1212 16:16:18.299333 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 12 16:16:26 crc kubenswrapper[4693]: I1212 16:16:26.358019 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:16:26 crc kubenswrapper[4693]: E1212 16:16:26.359166 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:16:40 crc kubenswrapper[4693]: I1212 16:16:40.357497 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:16:40 crc kubenswrapper[4693]: E1212 16:16:40.358330 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:16:40 crc kubenswrapper[4693]: I1212 16:16:40.968205 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 12 16:16:51 crc kubenswrapper[4693]: I1212 16:16:51.357474 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:16:51 crc kubenswrapper[4693]: E1212 16:16:51.358256 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:16:52 crc kubenswrapper[4693]: I1212 16:16:52.320875 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-wrwhd"] Dec 12 16:16:52 crc kubenswrapper[4693]: I1212 16:16:52.332825 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-wrwhd"] Dec 12 16:16:52 crc kubenswrapper[4693]: I1212 16:16:52.430877 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-h5zgl"] Dec 12 16:16:52 crc kubenswrapper[4693]: I1212 16:16:52.434461 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-h5zgl" Dec 12 16:16:52 crc kubenswrapper[4693]: I1212 16:16:52.463152 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-h5zgl"] Dec 12 16:16:52 crc kubenswrapper[4693]: I1212 16:16:52.573736 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e457e88c-30e2-45af-8a1c-d3056402343b-combined-ca-bundle\") pod \"heat-db-sync-h5zgl\" (UID: \"e457e88c-30e2-45af-8a1c-d3056402343b\") " pod="openstack/heat-db-sync-h5zgl" Dec 12 16:16:52 crc kubenswrapper[4693]: I1212 16:16:52.573784 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tnqt\" (UniqueName: \"kubernetes.io/projected/e457e88c-30e2-45af-8a1c-d3056402343b-kube-api-access-5tnqt\") pod \"heat-db-sync-h5zgl\" (UID: \"e457e88c-30e2-45af-8a1c-d3056402343b\") " pod="openstack/heat-db-sync-h5zgl" Dec 12 16:16:52 crc kubenswrapper[4693]: I1212 16:16:52.573992 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e457e88c-30e2-45af-8a1c-d3056402343b-config-data\") pod \"heat-db-sync-h5zgl\" (UID: \"e457e88c-30e2-45af-8a1c-d3056402343b\") " pod="openstack/heat-db-sync-h5zgl" Dec 12 16:16:52 crc kubenswrapper[4693]: I1212 16:16:52.676372 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e457e88c-30e2-45af-8a1c-d3056402343b-combined-ca-bundle\") pod \"heat-db-sync-h5zgl\" (UID: \"e457e88c-30e2-45af-8a1c-d3056402343b\") " pod="openstack/heat-db-sync-h5zgl" Dec 12 16:16:52 crc kubenswrapper[4693]: I1212 16:16:52.676449 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tnqt\" (UniqueName: \"kubernetes.io/projected/e457e88c-30e2-45af-8a1c-d3056402343b-kube-api-access-5tnqt\") pod \"heat-db-sync-h5zgl\" (UID: \"e457e88c-30e2-45af-8a1c-d3056402343b\") " pod="openstack/heat-db-sync-h5zgl" Dec 12 16:16:52 crc kubenswrapper[4693]: I1212 16:16:52.676547 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e457e88c-30e2-45af-8a1c-d3056402343b-config-data\") pod \"heat-db-sync-h5zgl\" (UID: \"e457e88c-30e2-45af-8a1c-d3056402343b\") " pod="openstack/heat-db-sync-h5zgl" Dec 12 16:16:52 crc kubenswrapper[4693]: I1212 16:16:52.683590 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e457e88c-30e2-45af-8a1c-d3056402343b-config-data\") pod \"heat-db-sync-h5zgl\" (UID: \"e457e88c-30e2-45af-8a1c-d3056402343b\") " pod="openstack/heat-db-sync-h5zgl" Dec 12 16:16:52 crc kubenswrapper[4693]: I1212 16:16:52.689149 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e457e88c-30e2-45af-8a1c-d3056402343b-combined-ca-bundle\") pod \"heat-db-sync-h5zgl\" (UID: \"e457e88c-30e2-45af-8a1c-d3056402343b\") " pod="openstack/heat-db-sync-h5zgl" Dec 12 16:16:52 crc kubenswrapper[4693]: I1212 16:16:52.700463 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tnqt\" (UniqueName: \"kubernetes.io/projected/e457e88c-30e2-45af-8a1c-d3056402343b-kube-api-access-5tnqt\") pod \"heat-db-sync-h5zgl\" (UID: \"e457e88c-30e2-45af-8a1c-d3056402343b\") " pod="openstack/heat-db-sync-h5zgl" Dec 12 16:16:52 crc kubenswrapper[4693]: I1212 16:16:52.803807 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-h5zgl" Dec 12 16:16:53 crc kubenswrapper[4693]: I1212 16:16:53.377392 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9bc2e28-21b6-42e1-a680-92426ae37ecf" path="/var/lib/kubelet/pods/f9bc2e28-21b6-42e1-a680-92426ae37ecf/volumes" Dec 12 16:16:53 crc kubenswrapper[4693]: I1212 16:16:53.378649 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-h5zgl"] Dec 12 16:16:53 crc kubenswrapper[4693]: W1212 16:16:53.387507 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode457e88c_30e2_45af_8a1c_d3056402343b.slice/crio-8c42ea8ebdd5883b1de6cb0ff3452a05924b273ec45f95d69796796b07ddb1ff WatchSource:0}: Error finding container 8c42ea8ebdd5883b1de6cb0ff3452a05924b273ec45f95d69796796b07ddb1ff: Status 404 returned error can't find the container with id 8c42ea8ebdd5883b1de6cb0ff3452a05924b273ec45f95d69796796b07ddb1ff Dec 12 16:16:53 crc kubenswrapper[4693]: I1212 16:16:53.766476 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-h5zgl" event={"ID":"e457e88c-30e2-45af-8a1c-d3056402343b","Type":"ContainerStarted","Data":"8c42ea8ebdd5883b1de6cb0ff3452a05924b273ec45f95d69796796b07ddb1ff"} Dec 12 16:16:54 crc kubenswrapper[4693]: I1212 16:16:54.655582 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:16:54 crc kubenswrapper[4693]: I1212 16:16:54.656151 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerName="ceilometer-central-agent" containerID="cri-o://9b01d28e16b5f450f1d91f2a50ff5f6141636d4ba749f4d4907d65374bd84491" gracePeriod=30 Dec 12 16:16:54 crc kubenswrapper[4693]: I1212 16:16:54.656308 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerName="proxy-httpd" containerID="cri-o://b427c51ade48afcb1d5163c49cde4246bdce9328b58a789be13f65f8fda6ecd7" gracePeriod=30 Dec 12 16:16:54 crc kubenswrapper[4693]: I1212 16:16:54.656366 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerName="sg-core" containerID="cri-o://04197b6fd184dbbab4c9011b5e90eb73bd85f4d56687f75c405e31b05e5021bb" 
gracePeriod=30
Dec 12 16:16:54 crc kubenswrapper[4693]: I1212 16:16:54.656413 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerName="ceilometer-notification-agent" containerID="cri-o://90c67448afc6ddacbc18ae66f889c063358de7b1edbb779b434ffbef21141d67" gracePeriod=30
Dec 12 16:16:54 crc kubenswrapper[4693]: I1212 16:16:54.799036 4693 generic.go:334] "Generic (PLEG): container finished" podID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerID="04197b6fd184dbbab4c9011b5e90eb73bd85f4d56687f75c405e31b05e5021bb" exitCode=2
Dec 12 16:16:54 crc kubenswrapper[4693]: I1212 16:16:54.799094 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3262795d-61c5-42aa-aed3-887cb1864d7f","Type":"ContainerDied","Data":"04197b6fd184dbbab4c9011b5e90eb73bd85f4d56687f75c405e31b05e5021bb"}
Dec 12 16:16:55 crc kubenswrapper[4693]: I1212 16:16:55.188118 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"]
Dec 12 16:16:55 crc kubenswrapper[4693]: I1212 16:16:55.268415 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 12 16:16:55 crc kubenswrapper[4693]: I1212 16:16:55.815805 4693 generic.go:334] "Generic (PLEG): container finished" podID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerID="b427c51ade48afcb1d5163c49cde4246bdce9328b58a789be13f65f8fda6ecd7" exitCode=0
Dec 12 16:16:55 crc kubenswrapper[4693]: I1212 16:16:55.816112 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3262795d-61c5-42aa-aed3-887cb1864d7f","Type":"ContainerDied","Data":"b427c51ade48afcb1d5163c49cde4246bdce9328b58a789be13f65f8fda6ecd7"}
Dec 12 16:16:55 crc kubenswrapper[4693]: I1212 16:16:55.816176 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3262795d-61c5-42aa-aed3-887cb1864d7f","Type":"ContainerDied","Data":"9b01d28e16b5f450f1d91f2a50ff5f6141636d4ba749f4d4907d65374bd84491"}
Dec 12 16:16:55 crc kubenswrapper[4693]: I1212 16:16:55.816130 4693 generic.go:334] "Generic (PLEG): container finished" podID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerID="9b01d28e16b5f450f1d91f2a50ff5f6141636d4ba749f4d4907d65374bd84491" exitCode=0
Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.593550 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
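The "Killing container with a grace period" entries above record the TERM-then-KILL contract behind gracePeriod=30: the runtime signals the container and escalates only if it is still running when the grace period lapses. A minimal process-level sketch of the same pattern for Unix (stopWithGrace and the sleep stand-in are illustrative, not CRI-O's code):

package main

import (
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, then escalates to SIGKILL once the
// grace period expires, mirroring the gracePeriod=30 entries above.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM)
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace expired: SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for a container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = stopWithGrace(cmd, 30*time.Second)
}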
Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.712181 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3262795d-61c5-42aa-aed3-887cb1864d7f-run-httpd\") pod \"3262795d-61c5-42aa-aed3-887cb1864d7f\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") "
Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.712254 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcxxp\" (UniqueName: \"kubernetes.io/projected/3262795d-61c5-42aa-aed3-887cb1864d7f-kube-api-access-rcxxp\") pod \"3262795d-61c5-42aa-aed3-887cb1864d7f\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") "
Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.712377 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-combined-ca-bundle\") pod \"3262795d-61c5-42aa-aed3-887cb1864d7f\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") "
Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.712490 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-scripts\") pod \"3262795d-61c5-42aa-aed3-887cb1864d7f\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") "
Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.712639 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-config-data\") pod \"3262795d-61c5-42aa-aed3-887cb1864d7f\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") "
Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.712673 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-ceilometer-tls-certs\") pod \"3262795d-61c5-42aa-aed3-887cb1864d7f\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") "
Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.712708 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-sg-core-conf-yaml\") pod \"3262795d-61c5-42aa-aed3-887cb1864d7f\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") "
Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.712784 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3262795d-61c5-42aa-aed3-887cb1864d7f-log-httpd\") pod \"3262795d-61c5-42aa-aed3-887cb1864d7f\" (UID: \"3262795d-61c5-42aa-aed3-887cb1864d7f\") "
Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.713839 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3262795d-61c5-42aa-aed3-887cb1864d7f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3262795d-61c5-42aa-aed3-887cb1864d7f" (UID: "3262795d-61c5-42aa-aed3-887cb1864d7f"). InnerVolumeSpecName "run-httpd".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.714741 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3262795d-61c5-42aa-aed3-887cb1864d7f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3262795d-61c5-42aa-aed3-887cb1864d7f" (UID: "3262795d-61c5-42aa-aed3-887cb1864d7f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.725464 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-scripts" (OuterVolumeSpecName: "scripts") pod "3262795d-61c5-42aa-aed3-887cb1864d7f" (UID: "3262795d-61c5-42aa-aed3-887cb1864d7f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.739790 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3262795d-61c5-42aa-aed3-887cb1864d7f-kube-api-access-rcxxp" (OuterVolumeSpecName: "kube-api-access-rcxxp") pod "3262795d-61c5-42aa-aed3-887cb1864d7f" (UID: "3262795d-61c5-42aa-aed3-887cb1864d7f"). InnerVolumeSpecName "kube-api-access-rcxxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.815509 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.815543 4693 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3262795d-61c5-42aa-aed3-887cb1864d7f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.815552 4693 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3262795d-61c5-42aa-aed3-887cb1864d7f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.815560 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcxxp\" (UniqueName: \"kubernetes.io/projected/3262795d-61c5-42aa-aed3-887cb1864d7f-kube-api-access-rcxxp\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.824001 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3262795d-61c5-42aa-aed3-887cb1864d7f" (UID: "3262795d-61c5-42aa-aed3-887cb1864d7f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.881426 4693 generic.go:334] "Generic (PLEG): container finished" podID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerID="90c67448afc6ddacbc18ae66f889c063358de7b1edbb779b434ffbef21141d67" exitCode=0 Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.881482 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3262795d-61c5-42aa-aed3-887cb1864d7f","Type":"ContainerDied","Data":"90c67448afc6ddacbc18ae66f889c063358de7b1edbb779b434ffbef21141d67"} Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.881519 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3262795d-61c5-42aa-aed3-887cb1864d7f","Type":"ContainerDied","Data":"1885f3ce3922d21562c14a31ffff9057bdbd0d027f02d517fa4982866569b97e"} Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.881525 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.881538 4693 scope.go:117] "RemoveContainer" containerID="b427c51ade48afcb1d5163c49cde4246bdce9328b58a789be13f65f8fda6ecd7" Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.926493 4693 scope.go:117] "RemoveContainer" containerID="04197b6fd184dbbab4c9011b5e90eb73bd85f4d56687f75c405e31b05e5021bb" Dec 12 16:16:56 crc kubenswrapper[4693]: I1212 16:16:56.927885 4693 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.018907 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3262795d-61c5-42aa-aed3-887cb1864d7f" (UID: "3262795d-61c5-42aa-aed3-887cb1864d7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.055189 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.153665 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3262795d-61c5-42aa-aed3-887cb1864d7f" (UID: "3262795d-61c5-42aa-aed3-887cb1864d7f"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.157403 4693 scope.go:117] "RemoveContainer" containerID="90c67448afc6ddacbc18ae66f889c063358de7b1edbb779b434ffbef21141d67" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.158374 4693 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.231473 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-config-data" (OuterVolumeSpecName: "config-data") pod "3262795d-61c5-42aa-aed3-887cb1864d7f" (UID: "3262795d-61c5-42aa-aed3-887cb1864d7f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.260779 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3262795d-61c5-42aa-aed3-887cb1864d7f-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.287830 4693 scope.go:117] "RemoveContainer" containerID="9b01d28e16b5f450f1d91f2a50ff5f6141636d4ba749f4d4907d65374bd84491" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.351446 4693 scope.go:117] "RemoveContainer" containerID="b427c51ade48afcb1d5163c49cde4246bdce9328b58a789be13f65f8fda6ecd7" Dec 12 16:16:57 crc kubenswrapper[4693]: E1212 16:16:57.373128 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b427c51ade48afcb1d5163c49cde4246bdce9328b58a789be13f65f8fda6ecd7\": container with ID starting with b427c51ade48afcb1d5163c49cde4246bdce9328b58a789be13f65f8fda6ecd7 not found: ID does not exist" containerID="b427c51ade48afcb1d5163c49cde4246bdce9328b58a789be13f65f8fda6ecd7" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.373178 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b427c51ade48afcb1d5163c49cde4246bdce9328b58a789be13f65f8fda6ecd7"} err="failed to get container status \"b427c51ade48afcb1d5163c49cde4246bdce9328b58a789be13f65f8fda6ecd7\": rpc error: code = NotFound desc = could not find container \"b427c51ade48afcb1d5163c49cde4246bdce9328b58a789be13f65f8fda6ecd7\": container with ID starting with b427c51ade48afcb1d5163c49cde4246bdce9328b58a789be13f65f8fda6ecd7 not found: ID does not exist" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.373205 4693 scope.go:117] "RemoveContainer" containerID="04197b6fd184dbbab4c9011b5e90eb73bd85f4d56687f75c405e31b05e5021bb" Dec 12 16:16:57 crc kubenswrapper[4693]: E1212 16:16:57.382416 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04197b6fd184dbbab4c9011b5e90eb73bd85f4d56687f75c405e31b05e5021bb\": container with ID starting with 04197b6fd184dbbab4c9011b5e90eb73bd85f4d56687f75c405e31b05e5021bb not found: ID does not exist" containerID="04197b6fd184dbbab4c9011b5e90eb73bd85f4d56687f75c405e31b05e5021bb" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.382465 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04197b6fd184dbbab4c9011b5e90eb73bd85f4d56687f75c405e31b05e5021bb"} err="failed to get container status 
\"04197b6fd184dbbab4c9011b5e90eb73bd85f4d56687f75c405e31b05e5021bb\": rpc error: code = NotFound desc = could not find container \"04197b6fd184dbbab4c9011b5e90eb73bd85f4d56687f75c405e31b05e5021bb\": container with ID starting with 04197b6fd184dbbab4c9011b5e90eb73bd85f4d56687f75c405e31b05e5021bb not found: ID does not exist" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.382497 4693 scope.go:117] "RemoveContainer" containerID="90c67448afc6ddacbc18ae66f889c063358de7b1edbb779b434ffbef21141d67" Dec 12 16:16:57 crc kubenswrapper[4693]: E1212 16:16:57.382940 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90c67448afc6ddacbc18ae66f889c063358de7b1edbb779b434ffbef21141d67\": container with ID starting with 90c67448afc6ddacbc18ae66f889c063358de7b1edbb779b434ffbef21141d67 not found: ID does not exist" containerID="90c67448afc6ddacbc18ae66f889c063358de7b1edbb779b434ffbef21141d67" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.382958 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90c67448afc6ddacbc18ae66f889c063358de7b1edbb779b434ffbef21141d67"} err="failed to get container status \"90c67448afc6ddacbc18ae66f889c063358de7b1edbb779b434ffbef21141d67\": rpc error: code = NotFound desc = could not find container \"90c67448afc6ddacbc18ae66f889c063358de7b1edbb779b434ffbef21141d67\": container with ID starting with 90c67448afc6ddacbc18ae66f889c063358de7b1edbb779b434ffbef21141d67 not found: ID does not exist" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.382972 4693 scope.go:117] "RemoveContainer" containerID="9b01d28e16b5f450f1d91f2a50ff5f6141636d4ba749f4d4907d65374bd84491" Dec 12 16:16:57 crc kubenswrapper[4693]: E1212 16:16:57.383199 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b01d28e16b5f450f1d91f2a50ff5f6141636d4ba749f4d4907d65374bd84491\": container with ID starting with 9b01d28e16b5f450f1d91f2a50ff5f6141636d4ba749f4d4907d65374bd84491 not found: ID does not exist" containerID="9b01d28e16b5f450f1d91f2a50ff5f6141636d4ba749f4d4907d65374bd84491" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.383214 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b01d28e16b5f450f1d91f2a50ff5f6141636d4ba749f4d4907d65374bd84491"} err="failed to get container status \"9b01d28e16b5f450f1d91f2a50ff5f6141636d4ba749f4d4907d65374bd84491\": rpc error: code = NotFound desc = could not find container \"9b01d28e16b5f450f1d91f2a50ff5f6141636d4ba749f4d4907d65374bd84491\": container with ID starting with 9b01d28e16b5f450f1d91f2a50ff5f6141636d4ba749f4d4907d65374bd84491 not found: ID does not exist" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.530286 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.554335 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.576359 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:16:57 crc kubenswrapper[4693]: E1212 16:16:57.577622 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerName="ceilometer-central-agent" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.577656 4693 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerName="ceilometer-central-agent" Dec 12 16:16:57 crc kubenswrapper[4693]: E1212 16:16:57.577672 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerName="sg-core" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.577679 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerName="sg-core" Dec 12 16:16:57 crc kubenswrapper[4693]: E1212 16:16:57.577719 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerName="ceilometer-notification-agent" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.577726 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerName="ceilometer-notification-agent" Dec 12 16:16:57 crc kubenswrapper[4693]: E1212 16:16:57.577749 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerName="proxy-httpd" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.577757 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerName="proxy-httpd" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.577987 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerName="proxy-httpd" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.578011 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerName="ceilometer-central-agent" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.578031 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerName="sg-core" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.578039 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" containerName="ceilometer-notification-agent" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.581783 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.583734 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.585794 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.585956 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.592615 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.672794 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.672889 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln4pz\" (UniqueName: \"kubernetes.io/projected/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-kube-api-access-ln4pz\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.672913 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-scripts\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.672952 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-run-httpd\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.673137 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.673313 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.673438 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-log-httpd\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.673489 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-config-data\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.775456 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.775575 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln4pz\" (UniqueName: \"kubernetes.io/projected/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-kube-api-access-ln4pz\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.775593 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-scripts\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.775637 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-run-httpd\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.775697 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.775760 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.775830 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-log-httpd\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.775863 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-config-data\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.778121 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-run-httpd\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.778199 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-log-httpd\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.784567 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.784611 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.784785 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-scripts\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.785292 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.785453 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-config-data\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.801120 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln4pz\" (UniqueName: \"kubernetes.io/projected/3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40-kube-api-access-ln4pz\") pod \"ceilometer-0\" (UID: \"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40\") " pod="openstack/ceilometer-0" Dec 12 16:16:57 crc kubenswrapper[4693]: I1212 16:16:57.899966 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 16:16:58 crc kubenswrapper[4693]: I1212 16:16:58.502997 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 16:16:58 crc kubenswrapper[4693]: I1212 16:16:58.944640 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40","Type":"ContainerStarted","Data":"a0eb58e0ee54caeeb659e9fffaafadce8ce85f24a9215a0f56ce419a97d5b489"} Dec 12 16:16:59 crc kubenswrapper[4693]: I1212 16:16:59.383876 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3262795d-61c5-42aa-aed3-887cb1864d7f" path="/var/lib/kubelet/pods/3262795d-61c5-42aa-aed3-887cb1864d7f/volumes" Dec 12 16:17:00 crc kubenswrapper[4693]: I1212 16:17:00.085211 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="6fd6556d-68c5-4492-804c-bc3188ab39b7" containerName="rabbitmq" containerID="cri-o://d82e8161185290fbcb563b7b15a99b5b8f10e53b27ce24cf46414cb6c2f02d2f" gracePeriod=604796 Dec 12 16:17:00 crc kubenswrapper[4693]: I1212 16:17:00.276435 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="d45363e2-3684-4fc6-b322-d99e6e87d3fd" containerName="rabbitmq" containerID="cri-o://e6cbcaca1b97d6adb98104d1522fb120a42fa7091ecaa1bf3915235161f4125f" gracePeriod=604795 Dec 12 16:17:06 crc kubenswrapper[4693]: I1212 16:17:06.358280 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:17:06 crc kubenswrapper[4693]: E1212 16:17:06.359138 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:17:07 crc kubenswrapper[4693]: I1212 16:17:07.094800 4693 generic.go:334] "Generic (PLEG): container finished" podID="6fd6556d-68c5-4492-804c-bc3188ab39b7" containerID="d82e8161185290fbcb563b7b15a99b5b8f10e53b27ce24cf46414cb6c2f02d2f" exitCode=0 Dec 12 16:17:07 crc kubenswrapper[4693]: I1212 16:17:07.094860 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6fd6556d-68c5-4492-804c-bc3188ab39b7","Type":"ContainerDied","Data":"d82e8161185290fbcb563b7b15a99b5b8f10e53b27ce24cf46414cb6c2f02d2f"} Dec 12 16:17:07 crc kubenswrapper[4693]: I1212 16:17:07.100856 4693 generic.go:334] "Generic (PLEG): container finished" podID="d45363e2-3684-4fc6-b322-d99e6e87d3fd" containerID="e6cbcaca1b97d6adb98104d1522fb120a42fa7091ecaa1bf3915235161f4125f" exitCode=0 Dec 12 16:17:07 crc kubenswrapper[4693]: I1212 16:17:07.100907 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"d45363e2-3684-4fc6-b322-d99e6e87d3fd","Type":"ContainerDied","Data":"e6cbcaca1b97d6adb98104d1522fb120a42fa7091ecaa1bf3915235161f4125f"} Dec 12 16:17:07 crc kubenswrapper[4693]: I1212 16:17:07.508082 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="d45363e2-3684-4fc6-b322-d99e6e87d3fd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection 
refused" Dec 12 16:17:07 crc kubenswrapper[4693]: I1212 16:17:07.864933 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="6fd6556d-68c5-4492-804c-bc3188ab39b7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.147663 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-w2qkq"] Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.150363 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.154911 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.171959 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-w2qkq"] Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.253806 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.253883 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b75k\" (UniqueName: \"kubernetes.io/projected/8103b038-eb16-4430-b466-0a02981b4e4a-kube-api-access-6b75k\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.253909 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.253930 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.253994 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.254061 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.254132 4693 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-config\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.355821 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b75k\" (UniqueName: \"kubernetes.io/projected/8103b038-eb16-4430-b466-0a02981b4e4a-kube-api-access-6b75k\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.355867 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.355908 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.355991 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.356154 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.356326 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-config\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.356428 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.359557 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.363091 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.364091 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.364074 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-config\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.364343 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.393311 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.420077 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b75k\" (UniqueName: \"kubernetes.io/projected/8103b038-eb16-4430-b466-0a02981b4e4a-kube-api-access-6b75k\") pod \"dnsmasq-dns-7d84b4d45c-w2qkq\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:10 crc kubenswrapper[4693]: I1212 16:17:10.476053 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.920378 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.936155 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.971909 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d45363e2-3684-4fc6-b322-d99e6e87d3fd-pod-info\") pod \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.971976 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tjgz\" (UniqueName: \"kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-kube-api-access-7tjgz\") pod \"6fd6556d-68c5-4492-804c-bc3188ab39b7\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.972061 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d45363e2-3684-4fc6-b322-d99e6e87d3fd-erlang-cookie-secret\") pod \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.972093 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-tls\") pod \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.972127 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6fd6556d-68c5-4492-804c-bc3188ab39b7-erlang-cookie-secret\") pod \"6fd6556d-68c5-4492-804c-bc3188ab39b7\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.972169 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-erlang-cookie\") pod \"6fd6556d-68c5-4492-804c-bc3188ab39b7\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.972201 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qqft\" (UniqueName: \"kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-kube-api-access-8qqft\") pod \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.972229 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-confd\") pod \"6fd6556d-68c5-4492-804c-bc3188ab39b7\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.972248 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-plugins-conf\") pod \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.972262 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-config-data\") pod 
\"6fd6556d-68c5-4492-804c-bc3188ab39b7\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.972690 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\") pod \"6fd6556d-68c5-4492-804c-bc3188ab39b7\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.973298 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-plugins\") pod \"6fd6556d-68c5-4492-804c-bc3188ab39b7\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.973328 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-plugins-conf\") pod \"6fd6556d-68c5-4492-804c-bc3188ab39b7\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.973357 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-plugins\") pod \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.973398 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6fd6556d-68c5-4492-804c-bc3188ab39b7-pod-info\") pod \"6fd6556d-68c5-4492-804c-bc3188ab39b7\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.973661 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\") pod \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.973751 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-server-conf\") pod \"6fd6556d-68c5-4492-804c-bc3188ab39b7\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.973755 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6fd6556d-68c5-4492-804c-bc3188ab39b7" (UID: "6fd6556d-68c5-4492-804c-bc3188ab39b7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.973809 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-erlang-cookie\") pod \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.973846 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-server-conf\") pod \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.973907 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-tls\") pod \"6fd6556d-68c5-4492-804c-bc3188ab39b7\" (UID: \"6fd6556d-68c5-4492-804c-bc3188ab39b7\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.973931 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-confd\") pod \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.973953 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-config-data\") pod \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\" (UID: \"d45363e2-3684-4fc6-b322-d99e6e87d3fd\") " Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.974782 4693 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.976706 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d45363e2-3684-4fc6-b322-d99e6e87d3fd" (UID: "d45363e2-3684-4fc6-b322-d99e6e87d3fd"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.976907 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d45363e2-3684-4fc6-b322-d99e6e87d3fd" (UID: "d45363e2-3684-4fc6-b322-d99e6e87d3fd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.978076 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6fd6556d-68c5-4492-804c-bc3188ab39b7" (UID: "6fd6556d-68c5-4492-804c-bc3188ab39b7"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.978821 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6fd6556d-68c5-4492-804c-bc3188ab39b7" (UID: "6fd6556d-68c5-4492-804c-bc3188ab39b7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.986775 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6fd6556d-68c5-4492-804c-bc3188ab39b7-pod-info" (OuterVolumeSpecName: "pod-info") pod "6fd6556d-68c5-4492-804c-bc3188ab39b7" (UID: "6fd6556d-68c5-4492-804c-bc3188ab39b7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.994592 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d45363e2-3684-4fc6-b322-d99e6e87d3fd" (UID: "d45363e2-3684-4fc6-b322-d99e6e87d3fd"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:14 crc kubenswrapper[4693]: I1212 16:17:14.998244 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6fd6556d-68c5-4492-804c-bc3188ab39b7" (UID: "6fd6556d-68c5-4492-804c-bc3188ab39b7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.007512 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-kube-api-access-7tjgz" (OuterVolumeSpecName: "kube-api-access-7tjgz") pod "6fd6556d-68c5-4492-804c-bc3188ab39b7" (UID: "6fd6556d-68c5-4492-804c-bc3188ab39b7"). InnerVolumeSpecName "kube-api-access-7tjgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.007583 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45363e2-3684-4fc6-b322-d99e6e87d3fd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d45363e2-3684-4fc6-b322-d99e6e87d3fd" (UID: "d45363e2-3684-4fc6-b322-d99e6e87d3fd"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.007746 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd6556d-68c5-4492-804c-bc3188ab39b7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6fd6556d-68c5-4492-804c-bc3188ab39b7" (UID: "6fd6556d-68c5-4492-804c-bc3188ab39b7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.007911 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-kube-api-access-8qqft" (OuterVolumeSpecName: "kube-api-access-8qqft") pod "d45363e2-3684-4fc6-b322-d99e6e87d3fd" (UID: "d45363e2-3684-4fc6-b322-d99e6e87d3fd"). 
InnerVolumeSpecName "kube-api-access-8qqft". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.013255 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d45363e2-3684-4fc6-b322-d99e6e87d3fd" (UID: "d45363e2-3684-4fc6-b322-d99e6e87d3fd"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.027976 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d45363e2-3684-4fc6-b322-d99e6e87d3fd-pod-info" (OuterVolumeSpecName: "pod-info") pod "d45363e2-3684-4fc6-b322-d99e6e87d3fd" (UID: "d45363e2-3684-4fc6-b322-d99e6e87d3fd"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.041761 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-config-data" (OuterVolumeSpecName: "config-data") pod "6fd6556d-68c5-4492-804c-bc3188ab39b7" (UID: "6fd6556d-68c5-4492-804c-bc3188ab39b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.066857 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74" (OuterVolumeSpecName: "persistence") pod "d45363e2-3684-4fc6-b322-d99e6e87d3fd" (UID: "d45363e2-3684-4fc6-b322-d99e6e87d3fd"). InnerVolumeSpecName "pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.078766 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a732b9b8-db5d-4a31-98e5-2cac61707982" (OuterVolumeSpecName: "persistence") pod "6fd6556d-68c5-4492-804c-bc3188ab39b7" (UID: "6fd6556d-68c5-4492-804c-bc3188ab39b7"). InnerVolumeSpecName "pvc-a732b9b8-db5d-4a31-98e5-2cac61707982". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.080754 4693 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.080794 4693 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d45363e2-3684-4fc6-b322-d99e6e87d3fd-pod-info\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.080807 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tjgz\" (UniqueName: \"kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-kube-api-access-7tjgz\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.080822 4693 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d45363e2-3684-4fc6-b322-d99e6e87d3fd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.080833 4693 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.080843 4693 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6fd6556d-68c5-4492-804c-bc3188ab39b7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.080854 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qqft\" (UniqueName: \"kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-kube-api-access-8qqft\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.080866 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.080876 4693 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.080912 4693 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\") on node \"crc\" " Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.080926 4693 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.080941 4693 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.080952 4693 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.080962 4693 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6fd6556d-68c5-4492-804c-bc3188ab39b7-pod-info\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.080982 4693 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\") on node \"crc\" " Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.080994 4693 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.087541 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-server-conf" (OuterVolumeSpecName: "server-conf") pod "d45363e2-3684-4fc6-b322-d99e6e87d3fd" (UID: "d45363e2-3684-4fc6-b322-d99e6e87d3fd"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.116564 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-config-data" (OuterVolumeSpecName: "config-data") pod "d45363e2-3684-4fc6-b322-d99e6e87d3fd" (UID: "d45363e2-3684-4fc6-b322-d99e6e87d3fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.166213 4693 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.166427 4693 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74") on node "crc" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.171732 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-server-conf" (OuterVolumeSpecName: "server-conf") pod "6fd6556d-68c5-4492-804c-bc3188ab39b7" (UID: "6fd6556d-68c5-4492-804c-bc3188ab39b7"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.183560 4693 reconciler_common.go:293] "Volume detached for volume \"pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.183607 4693 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6fd6556d-68c5-4492-804c-bc3188ab39b7-server-conf\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.183623 4693 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-server-conf\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.183634 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d45363e2-3684-4fc6-b322-d99e6e87d3fd-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.197757 4693 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.198107 4693 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a732b9b8-db5d-4a31-98e5-2cac61707982" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a732b9b8-db5d-4a31-98e5-2cac61707982") on node "crc" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.209856 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"d45363e2-3684-4fc6-b322-d99e6e87d3fd","Type":"ContainerDied","Data":"9186aa32cbfd0a90bdca08a6a278274366680a126a12d85a1784b15934429b6a"} Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.209895 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.209916 4693 scope.go:117] "RemoveContainer" containerID="e6cbcaca1b97d6adb98104d1522fb120a42fa7091ecaa1bf3915235161f4125f" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.214548 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6fd6556d-68c5-4492-804c-bc3188ab39b7","Type":"ContainerDied","Data":"ce5a06b4ffee453d276eda4625c2a45632134402fadc573ef2ed9fc1f895544e"} Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.214641 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.245014 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6fd6556d-68c5-4492-804c-bc3188ab39b7" (UID: "6fd6556d-68c5-4492-804c-bc3188ab39b7"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.273266 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d45363e2-3684-4fc6-b322-d99e6e87d3fd" (UID: "d45363e2-3684-4fc6-b322-d99e6e87d3fd"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.284989 4693 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d45363e2-3684-4fc6-b322-d99e6e87d3fd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.285031 4693 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6fd6556d-68c5-4492-804c-bc3188ab39b7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.285044 4693 reconciler_common.go:293] "Volume detached for volume \"pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.542941 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.592049 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.619399 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.633919 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Dec 12 16:17:15 crc kubenswrapper[4693]: E1212 16:17:15.634407 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d45363e2-3684-4fc6-b322-d99e6e87d3fd" containerName="setup-container" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.634418 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="d45363e2-3684-4fc6-b322-d99e6e87d3fd" containerName="setup-container" Dec 12 16:17:15 crc kubenswrapper[4693]: E1212 16:17:15.634436 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd6556d-68c5-4492-804c-bc3188ab39b7" containerName="setup-container" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.634442 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd6556d-68c5-4492-804c-bc3188ab39b7" containerName="setup-container" Dec 12 16:17:15 crc kubenswrapper[4693]: E1212 16:17:15.634458 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd6556d-68c5-4492-804c-bc3188ab39b7" containerName="rabbitmq" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.634464 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd6556d-68c5-4492-804c-bc3188ab39b7" containerName="rabbitmq" Dec 12 16:17:15 crc kubenswrapper[4693]: E1212 16:17:15.634502 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d45363e2-3684-4fc6-b322-d99e6e87d3fd" containerName="rabbitmq" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.634508 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="d45363e2-3684-4fc6-b322-d99e6e87d3fd" containerName="rabbitmq" Dec 12 16:17:15 crc 
kubenswrapper[4693]: I1212 16:17:15.634722 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="d45363e2-3684-4fc6-b322-d99e6e87d3fd" containerName="rabbitmq" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.634746 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd6556d-68c5-4492-804c-bc3188ab39b7" containerName="rabbitmq" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.636007 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.646233 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.654320 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.671754 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.674440 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.674556 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.681187 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.681400 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.681553 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2s7vz" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.681661 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.681847 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.681974 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.682094 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.808704 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.808816 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5829509e-ca60-403e-8444-81ebb63d2df5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.808897 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/5829509e-ca60-403e-8444-81ebb63d2df5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.808936 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-config-data\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.808962 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.809028 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5829509e-ca60-403e-8444-81ebb63d2df5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.809118 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdhhn\" (UniqueName: \"kubernetes.io/projected/5829509e-ca60-403e-8444-81ebb63d2df5-kube-api-access-sdhhn\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.809163 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5829509e-ca60-403e-8444-81ebb63d2df5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.809192 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.809243 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5829509e-ca60-403e-8444-81ebb63d2df5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.809515 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5829509e-ca60-403e-8444-81ebb63d2df5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.810844 4693 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-pod-info\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.811007 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rzxb\" (UniqueName: \"kubernetes.io/projected/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-kube-api-access-4rzxb\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.811051 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.811247 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5829509e-ca60-403e-8444-81ebb63d2df5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.811328 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-server-conf\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.811433 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.811549 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5829509e-ca60-403e-8444-81ebb63d2df5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.811602 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.811677 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.811700 4693 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.811727 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5829509e-ca60-403e-8444-81ebb63d2df5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.916317 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5829509e-ca60-403e-8444-81ebb63d2df5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.916381 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.916413 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5829509e-ca60-403e-8444-81ebb63d2df5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.916473 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5829509e-ca60-403e-8444-81ebb63d2df5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.916497 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-pod-info\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.916549 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rzxb\" (UniqueName: \"kubernetes.io/projected/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-kube-api-access-4rzxb\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.916569 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.916591 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/5829509e-ca60-403e-8444-81ebb63d2df5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.916613 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-server-conf\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.916837 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.916890 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5829509e-ca60-403e-8444-81ebb63d2df5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.916932 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.916961 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.916975 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.917004 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5829509e-ca60-403e-8444-81ebb63d2df5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.917046 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.917066 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5829509e-ca60-403e-8444-81ebb63d2df5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.917087 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5829509e-ca60-403e-8444-81ebb63d2df5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.917107 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-config-data\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.917129 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.917180 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5829509e-ca60-403e-8444-81ebb63d2df5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.917214 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdhhn\" (UniqueName: \"kubernetes.io/projected/5829509e-ca60-403e-8444-81ebb63d2df5-kube-api-access-sdhhn\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.918153 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5829509e-ca60-403e-8444-81ebb63d2df5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.918382 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5829509e-ca60-403e-8444-81ebb63d2df5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.919707 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.920157 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-config-data\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.921995 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.922297 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5829509e-ca60-403e-8444-81ebb63d2df5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.922344 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.922566 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5829509e-ca60-403e-8444-81ebb63d2df5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.924167 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.924207 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5730403f2a641c2d36c77eefd13e83fe4cbfb23ef325a0e0333962c172190de6/globalmount\"" pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.924190 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.924264 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5829509e-ca60-403e-8444-81ebb63d2df5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.924304 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4cf1f32a83213719dc85bfec44caf9d44a7960a8ddbf15eb937fd4eb898307df/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.924882 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-pod-info\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.926519 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-server-conf\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.934052 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5829509e-ca60-403e-8444-81ebb63d2df5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.934674 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5829509e-ca60-403e-8444-81ebb63d2df5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.935025 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5829509e-ca60-403e-8444-81ebb63d2df5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.935157 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.935865 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 
16:17:15.936132 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5829509e-ca60-403e-8444-81ebb63d2df5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.936695 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rzxb\" (UniqueName: \"kubernetes.io/projected/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-kube-api-access-4rzxb\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.938117 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:15 crc kubenswrapper[4693]: I1212 16:17:15.953129 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdhhn\" (UniqueName: \"kubernetes.io/projected/5829509e-ca60-403e-8444-81ebb63d2df5-kube-api-access-sdhhn\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:16 crc kubenswrapper[4693]: I1212 16:17:16.007327 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a732b9b8-db5d-4a31-98e5-2cac61707982\") pod \"rabbitmq-cell1-server-0\" (UID: \"5829509e-ca60-403e-8444-81ebb63d2df5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:16 crc kubenswrapper[4693]: I1212 16:17:16.022927 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 12 16:17:16 crc kubenswrapper[4693]: I1212 16:17:16.058465 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4fe08c3-62c4-4900-a36d-b568a198cd74\") pod \"rabbitmq-server-2\" (UID: \"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed\") " pod="openstack/rabbitmq-server-2" Dec 12 16:17:16 crc kubenswrapper[4693]: I1212 16:17:16.308596 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Dec 12 16:17:17 crc kubenswrapper[4693]: I1212 16:17:17.369652 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd6556d-68c5-4492-804c-bc3188ab39b7" path="/var/lib/kubelet/pods/6fd6556d-68c5-4492-804c-bc3188ab39b7/volumes" Dec 12 16:17:17 crc kubenswrapper[4693]: I1212 16:17:17.370874 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45363e2-3684-4fc6-b322-d99e6e87d3fd" path="/var/lib/kubelet/pods/d45363e2-3684-4fc6-b322-d99e6e87d3fd/volumes" Dec 12 16:17:19 crc kubenswrapper[4693]: I1212 16:17:19.360238 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:17:19 crc kubenswrapper[4693]: E1212 16:17:19.360815 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:17:21 crc kubenswrapper[4693]: E1212 16:17:21.977154 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 12 16:17:21 crc kubenswrapper[4693]: E1212 16:17:21.977519 4693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 12 16:17:21 crc kubenswrapper[4693]: E1212 16:17:21.977637 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5tnqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-h5zgl_openstack(e457e88c-30e2-45af-8a1c-d3056402343b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:17:21 crc kubenswrapper[4693]: E1212 16:17:21.978788 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-h5zgl" podUID="e457e88c-30e2-45af-8a1c-d3056402343b" Dec 12 16:17:22 crc kubenswrapper[4693]: E1212 16:17:22.310598 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-h5zgl" podUID="e457e88c-30e2-45af-8a1c-d3056402343b" Dec 12 16:17:23 crc kubenswrapper[4693]: I1212 16:17:23.374591 4693 scope.go:117] "RemoveContainer" containerID="d9cff3e0e49928af617fc29c4e6db7c48889e3152f393792640f1c5d6d777637" Dec 12 16:17:23 crc kubenswrapper[4693]: E1212 16:17:23.768943 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 12 16:17:23 crc kubenswrapper[4693]: E1212 16:17:23.769012 4693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 12 16:17:23 crc kubenswrapper[4693]: E1212 16:17:23.769119 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ncbh545h5b8h684h8fh85h79hd8h4h5bfh96h5cch65h5bfh89h78hfbhb9h5b6hdfh7ch97h656h5c6h5d4h589h6dh694h5f7hffh5f7hb6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ln4pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 16:17:23 crc kubenswrapper[4693]: I1212 16:17:23.833549 4693 scope.go:117] "RemoveContainer" containerID="d82e8161185290fbcb563b7b15a99b5b8f10e53b27ce24cf46414cb6c2f02d2f" Dec 12 16:17:24 crc kubenswrapper[4693]: I1212 16:17:24.004343 4693 scope.go:117] "RemoveContainer" containerID="3be89743cd21d75b5646c0e66954be38785af6de6dae899ed1446b1366deecb8" Dec 12 16:17:24 crc kubenswrapper[4693]: I1212 16:17:24.268564 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-w2qkq"] Dec 12 16:17:24 crc kubenswrapper[4693]: W1212 16:17:24.269044 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8103b038_eb16_4430_b466_0a02981b4e4a.slice/crio-00e273df99b700c511084ae6bc8542ddbda594f006141c8d21f5a1149b8e8921 WatchSource:0}: Error finding container 00e273df99b700c511084ae6bc8542ddbda594f006141c8d21f5a1149b8e8921: Status 404 returned error can't find the container with id 00e273df99b700c511084ae6bc8542ddbda594f006141c8d21f5a1149b8e8921 Dec 12 16:17:24 crc kubenswrapper[4693]: I1212 16:17:24.334903 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" event={"ID":"8103b038-eb16-4430-b466-0a02981b4e4a","Type":"ContainerStarted","Data":"00e273df99b700c511084ae6bc8542ddbda594f006141c8d21f5a1149b8e8921"} Dec 12 16:17:24 crc kubenswrapper[4693]: I1212 16:17:24.396336 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Dec 12 16:17:24 crc kubenswrapper[4693]: I1212 16:17:24.415616 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 16:17:25 crc kubenswrapper[4693]: I1212 16:17:25.364615 4693 generic.go:334] "Generic (PLEG): container finished" podID="8103b038-eb16-4430-b466-0a02981b4e4a" containerID="fe600ecae669a66aae1fd1d83c1cf5b55549d85a6c5d11f28304062f472364a0" exitCode=0 Dec 12 16:17:25 crc kubenswrapper[4693]: I1212 16:17:25.387233 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5829509e-ca60-403e-8444-81ebb63d2df5","Type":"ContainerStarted","Data":"7d43a84d6c1942c29a7ae5aac46b923581bdae1beb2cd4b2e94d156ec74a3481"} Dec 12 16:17:25 crc kubenswrapper[4693]: I1212 16:17:25.387293 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40","Type":"ContainerStarted","Data":"7e77ea5d5974ac97c12353fd3cda8de1897902e50c2133d976eeeca590c3df72"} Dec 12 16:17:25 crc kubenswrapper[4693]: I1212 16:17:25.387308 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed","Type":"ContainerStarted","Data":"2632535879d336ceabc926f866bff638199ee4a0890a3e4a4b23ed1face99232"} Dec 12 16:17:25 crc kubenswrapper[4693]: I1212 16:17:25.387322 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" event={"ID":"8103b038-eb16-4430-b466-0a02981b4e4a","Type":"ContainerDied","Data":"fe600ecae669a66aae1fd1d83c1cf5b55549d85a6c5d11f28304062f472364a0"} Dec 12 16:17:26 crc kubenswrapper[4693]: I1212 16:17:26.379003 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" event={"ID":"8103b038-eb16-4430-b466-0a02981b4e4a","Type":"ContainerStarted","Data":"32f5dc79efa9a41ff3d318ba5d14411ba8eec42e061818c56c64beb2efbf3af5"} Dec 12 16:17:26 crc kubenswrapper[4693]: I1212 16:17:26.379633 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:26 crc kubenswrapper[4693]: I1212 16:17:26.382775 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40","Type":"ContainerStarted","Data":"4a7b0d2c39c5ce5da29990d04391a2fc4e51a3764c221ef2ded7513365d895f0"} Dec 12 16:17:26 crc kubenswrapper[4693]: I1212 16:17:26.423891 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" podStartSLOduration=16.423866836 podStartE2EDuration="16.423866836s" podCreationTimestamp="2025-12-12 16:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:26.406850581 +0000 UTC m=+1873.575490232" watchObservedRunningTime="2025-12-12 16:17:26.423866836 +0000 UTC m=+1873.592506447" Dec 12 16:17:27 crc kubenswrapper[4693]: I1212 16:17:27.409341 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed","Type":"ContainerStarted","Data":"a333cdbb65eb3f265d5a9407919166d465208e4c44618b22e78bb3a22a585fa3"} Dec 12 16:17:27 crc kubenswrapper[4693]: I1212 16:17:27.410976 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5829509e-ca60-403e-8444-81ebb63d2df5","Type":"ContainerStarted","Data":"172640f12dce0872f7626ac0632d94cd7729895b12755eb2ffc276c61082df07"} Dec 12 16:17:27 crc kubenswrapper[4693]: E1212 16:17:27.924989 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40" Dec 12 16:17:28 crc kubenswrapper[4693]: I1212 16:17:28.429721 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40","Type":"ContainerStarted","Data":"fbd699453ca6fc4ae6b25975df758889a3bee9c37fe1ad9d3b004980732e3f2c"} Dec 12 16:17:28 crc kubenswrapper[4693]: E1212 16:17:28.432260 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40" Dec 12 16:17:29 crc kubenswrapper[4693]: I1212 16:17:29.448378 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 12 16:17:29 crc kubenswrapper[4693]: E1212 16:17:29.452065 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40" Dec 12 16:17:30 crc kubenswrapper[4693]: E1212 16:17:30.455699 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40" Dec 12 16:17:30 crc kubenswrapper[4693]: I1212 16:17:30.478262 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:30 crc kubenswrapper[4693]: I1212 16:17:30.587745 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"] Dec 12 16:17:30 crc kubenswrapper[4693]: I1212 16:17:30.588050 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86" podUID="199159e4-5fda-4c35-a5f3-c1d84e68b9bc" containerName="dnsmasq-dns" containerID="cri-o://593e4afc2d92a69577e3dbda2c5d309968db70a1d7ea9f7a8fc0fef7793616ba" gracePeriod=10 Dec 12 16:17:30 crc kubenswrapper[4693]: I1212 16:17:30.850969 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-6pdh7"] Dec 12 16:17:30 crc kubenswrapper[4693]: I1212 16:17:30.853240 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:30 crc kubenswrapper[4693]: I1212 16:17:30.881436 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-6pdh7"] Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.008341 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.008411 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.008438 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-config\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.008459 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.008608 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w297z\" (UniqueName: \"kubernetes.io/projected/108692e6-5993-4e9c-9286-e8ffa28a2c5b-kube-api-access-w297z\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.008820 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.009091 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.112523 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 
16:17:31.112628 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.112735 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.112775 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.112805 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-config\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.112822 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.112912 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w297z\" (UniqueName: \"kubernetes.io/projected/108692e6-5993-4e9c-9286-e8ffa28a2c5b-kube-api-access-w297z\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.113606 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.121157 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.121776 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.122113 4693 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-config\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.122177 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.122181 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/108692e6-5993-4e9c-9286-e8ffa28a2c5b-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.137309 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w297z\" (UniqueName: \"kubernetes.io/projected/108692e6-5993-4e9c-9286-e8ffa28a2c5b-kube-api-access-w297z\") pod \"dnsmasq-dns-6f6df4f56c-6pdh7\" (UID: \"108692e6-5993-4e9c-9286-e8ffa28a2c5b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.180206 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.354387 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.362296 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:17:31 crc kubenswrapper[4693]: E1212 16:17:31.362588 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.480528 4693 generic.go:334] "Generic (PLEG): container finished" podID="199159e4-5fda-4c35-a5f3-c1d84e68b9bc" containerID="593e4afc2d92a69577e3dbda2c5d309968db70a1d7ea9f7a8fc0fef7793616ba" exitCode=0 Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.480721 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86" event={"ID":"199159e4-5fda-4c35-a5f3-c1d84e68b9bc","Type":"ContainerDied","Data":"593e4afc2d92a69577e3dbda2c5d309968db70a1d7ea9f7a8fc0fef7793616ba"} Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.481743 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86" event={"ID":"199159e4-5fda-4c35-a5f3-c1d84e68b9bc","Type":"ContainerDied","Data":"dc29deb68a92ec81d825f41d58a9e09d699e7da42517b59a6aa1e84656092f95"} Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.480894 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-lmn86" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.481814 4693 scope.go:117] "RemoveContainer" containerID="593e4afc2d92a69577e3dbda2c5d309968db70a1d7ea9f7a8fc0fef7793616ba" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.523262 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-config\") pod \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.523376 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2snd\" (UniqueName: \"kubernetes.io/projected/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-kube-api-access-f2snd\") pod \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.523431 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-ovsdbserver-sb\") pod \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.523501 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-ovsdbserver-nb\") pod \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.523634 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-dns-swift-storage-0\") pod \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.523722 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-dns-svc\") pod \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.544995 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-kube-api-access-f2snd" (OuterVolumeSpecName: "kube-api-access-f2snd") pod "199159e4-5fda-4c35-a5f3-c1d84e68b9bc" (UID: "199159e4-5fda-4c35-a5f3-c1d84e68b9bc"). InnerVolumeSpecName "kube-api-access-f2snd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.620873 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "199159e4-5fda-4c35-a5f3-c1d84e68b9bc" (UID: "199159e4-5fda-4c35-a5f3-c1d84e68b9bc"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.622236 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-config" (OuterVolumeSpecName: "config") pod "199159e4-5fda-4c35-a5f3-c1d84e68b9bc" (UID: "199159e4-5fda-4c35-a5f3-c1d84e68b9bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.626742 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "199159e4-5fda-4c35-a5f3-c1d84e68b9bc" (UID: "199159e4-5fda-4c35-a5f3-c1d84e68b9bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.627233 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-dns-svc\") pod \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\" (UID: \"199159e4-5fda-4c35-a5f3-c1d84e68b9bc\") " Dec 12 16:17:31 crc kubenswrapper[4693]: W1212 16:17:31.627361 4693 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/199159e4-5fda-4c35-a5f3-c1d84e68b9bc/volumes/kubernetes.io~configmap/dns-svc Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.627386 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "199159e4-5fda-4c35-a5f3-c1d84e68b9bc" (UID: "199159e4-5fda-4c35-a5f3-c1d84e68b9bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.627809 4693 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.627823 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.627835 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.627843 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2snd\" (UniqueName: \"kubernetes.io/projected/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-kube-api-access-f2snd\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.635343 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "199159e4-5fda-4c35-a5f3-c1d84e68b9bc" (UID: "199159e4-5fda-4c35-a5f3-c1d84e68b9bc"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.635665 4693 scope.go:117] "RemoveContainer" containerID="4ccc35eb2d2f3108ebb8c2d8ec79dc50b82572e26b0735c6a2f79add86bed9c3" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.721146 4693 scope.go:117] "RemoveContainer" containerID="593e4afc2d92a69577e3dbda2c5d309968db70a1d7ea9f7a8fc0fef7793616ba" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.729777 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:31 crc kubenswrapper[4693]: E1212 16:17:31.735950 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"593e4afc2d92a69577e3dbda2c5d309968db70a1d7ea9f7a8fc0fef7793616ba\": container with ID starting with 593e4afc2d92a69577e3dbda2c5d309968db70a1d7ea9f7a8fc0fef7793616ba not found: ID does not exist" containerID="593e4afc2d92a69577e3dbda2c5d309968db70a1d7ea9f7a8fc0fef7793616ba" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.736013 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"593e4afc2d92a69577e3dbda2c5d309968db70a1d7ea9f7a8fc0fef7793616ba"} err="failed to get container status \"593e4afc2d92a69577e3dbda2c5d309968db70a1d7ea9f7a8fc0fef7793616ba\": rpc error: code = NotFound desc = could not find container \"593e4afc2d92a69577e3dbda2c5d309968db70a1d7ea9f7a8fc0fef7793616ba\": container with ID starting with 593e4afc2d92a69577e3dbda2c5d309968db70a1d7ea9f7a8fc0fef7793616ba not found: ID does not exist" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.736054 4693 scope.go:117] "RemoveContainer" containerID="4ccc35eb2d2f3108ebb8c2d8ec79dc50b82572e26b0735c6a2f79add86bed9c3" Dec 12 16:17:31 crc kubenswrapper[4693]: E1212 16:17:31.736554 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ccc35eb2d2f3108ebb8c2d8ec79dc50b82572e26b0735c6a2f79add86bed9c3\": container with ID starting with 4ccc35eb2d2f3108ebb8c2d8ec79dc50b82572e26b0735c6a2f79add86bed9c3 not found: ID does not exist" containerID="4ccc35eb2d2f3108ebb8c2d8ec79dc50b82572e26b0735c6a2f79add86bed9c3" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.736585 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ccc35eb2d2f3108ebb8c2d8ec79dc50b82572e26b0735c6a2f79add86bed9c3"} err="failed to get container status \"4ccc35eb2d2f3108ebb8c2d8ec79dc50b82572e26b0735c6a2f79add86bed9c3\": rpc error: code = NotFound desc = could not find container \"4ccc35eb2d2f3108ebb8c2d8ec79dc50b82572e26b0735c6a2f79add86bed9c3\": container with ID starting with 4ccc35eb2d2f3108ebb8c2d8ec79dc50b82572e26b0735c6a2f79add86bed9c3 not found: ID does not exist" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.737166 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "199159e4-5fda-4c35-a5f3-c1d84e68b9bc" (UID: "199159e4-5fda-4c35-a5f3-c1d84e68b9bc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.820943 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"] Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.832244 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/199159e4-5fda-4c35-a5f3-c1d84e68b9bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.834771 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-lmn86"] Dec 12 16:17:31 crc kubenswrapper[4693]: I1212 16:17:31.855788 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-6pdh7"] Dec 12 16:17:31 crc kubenswrapper[4693]: W1212 16:17:31.863934 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod108692e6_5993_4e9c_9286_e8ffa28a2c5b.slice/crio-2c1be105a7520838ee656bd9ad117347b0818ba1358e11f926a607bcfae72337 WatchSource:0}: Error finding container 2c1be105a7520838ee656bd9ad117347b0818ba1358e11f926a607bcfae72337: Status 404 returned error can't find the container with id 2c1be105a7520838ee656bd9ad117347b0818ba1358e11f926a607bcfae72337 Dec 12 16:17:32 crc kubenswrapper[4693]: I1212 16:17:32.495992 4693 generic.go:334] "Generic (PLEG): container finished" podID="108692e6-5993-4e9c-9286-e8ffa28a2c5b" containerID="980a0062688d3fa30652c9edeeaeab933b3e53041e3253aba9cd5bd54144770d" exitCode=0 Dec 12 16:17:32 crc kubenswrapper[4693]: I1212 16:17:32.496050 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" event={"ID":"108692e6-5993-4e9c-9286-e8ffa28a2c5b","Type":"ContainerDied","Data":"980a0062688d3fa30652c9edeeaeab933b3e53041e3253aba9cd5bd54144770d"} Dec 12 16:17:32 crc kubenswrapper[4693]: I1212 16:17:32.496339 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" event={"ID":"108692e6-5993-4e9c-9286-e8ffa28a2c5b","Type":"ContainerStarted","Data":"2c1be105a7520838ee656bd9ad117347b0818ba1358e11f926a607bcfae72337"} Dec 12 16:17:33 crc kubenswrapper[4693]: I1212 16:17:33.375165 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="199159e4-5fda-4c35-a5f3-c1d84e68b9bc" path="/var/lib/kubelet/pods/199159e4-5fda-4c35-a5f3-c1d84e68b9bc/volumes" Dec 12 16:17:33 crc kubenswrapper[4693]: I1212 16:17:33.512090 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" event={"ID":"108692e6-5993-4e9c-9286-e8ffa28a2c5b","Type":"ContainerStarted","Data":"13b7e3268545696df059f66946c169082fbae90e51c101dbb2438d8dcedded18"} Dec 12 16:17:33 crc kubenswrapper[4693]: I1212 16:17:33.512895 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:33 crc kubenswrapper[4693]: I1212 16:17:33.547131 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" podStartSLOduration=3.547105799 podStartE2EDuration="3.547105799s" podCreationTimestamp="2025-12-12 16:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:33.534170695 +0000 UTC m=+1880.702810296" watchObservedRunningTime="2025-12-12 16:17:33.547105799 +0000 UTC m=+1880.715745400" 
Dec 12 16:17:36 crc kubenswrapper[4693]: I1212 16:17:36.554752 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-h5zgl" event={"ID":"e457e88c-30e2-45af-8a1c-d3056402343b","Type":"ContainerStarted","Data":"bf5ca634fda2b51cb09dee2780442f1cbc4a786b9e3bbca86aa5aa70383f169d"} Dec 12 16:17:36 crc kubenswrapper[4693]: I1212 16:17:36.596262 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-h5zgl" podStartSLOduration=2.438655277 podStartE2EDuration="44.596208291s" podCreationTimestamp="2025-12-12 16:16:52 +0000 UTC" firstStartedPulling="2025-12-12 16:16:53.392954014 +0000 UTC m=+1840.561593615" lastFinishedPulling="2025-12-12 16:17:35.550507028 +0000 UTC m=+1882.719146629" observedRunningTime="2025-12-12 16:17:36.586099644 +0000 UTC m=+1883.754739305" watchObservedRunningTime="2025-12-12 16:17:36.596208291 +0000 UTC m=+1883.764847912" Dec 12 16:17:39 crc kubenswrapper[4693]: I1212 16:17:39.597680 4693 generic.go:334] "Generic (PLEG): container finished" podID="e457e88c-30e2-45af-8a1c-d3056402343b" containerID="bf5ca634fda2b51cb09dee2780442f1cbc4a786b9e3bbca86aa5aa70383f169d" exitCode=0 Dec 12 16:17:39 crc kubenswrapper[4693]: I1212 16:17:39.597822 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-h5zgl" event={"ID":"e457e88c-30e2-45af-8a1c-d3056402343b","Type":"ContainerDied","Data":"bf5ca634fda2b51cb09dee2780442f1cbc4a786b9e3bbca86aa5aa70383f169d"} Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.184450 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-6pdh7" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.270942 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-w2qkq"] Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.271825 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" podUID="8103b038-eb16-4430-b466-0a02981b4e4a" containerName="dnsmasq-dns" containerID="cri-o://32f5dc79efa9a41ff3d318ba5d14411ba8eec42e061818c56c64beb2efbf3af5" gracePeriod=10 Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.302171 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-h5zgl" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.374707 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e457e88c-30e2-45af-8a1c-d3056402343b-config-data\") pod \"e457e88c-30e2-45af-8a1c-d3056402343b\" (UID: \"e457e88c-30e2-45af-8a1c-d3056402343b\") " Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.374769 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tnqt\" (UniqueName: \"kubernetes.io/projected/e457e88c-30e2-45af-8a1c-d3056402343b-kube-api-access-5tnqt\") pod \"e457e88c-30e2-45af-8a1c-d3056402343b\" (UID: \"e457e88c-30e2-45af-8a1c-d3056402343b\") " Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.374815 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e457e88c-30e2-45af-8a1c-d3056402343b-combined-ca-bundle\") pod \"e457e88c-30e2-45af-8a1c-d3056402343b\" (UID: \"e457e88c-30e2-45af-8a1c-d3056402343b\") " Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.380910 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e457e88c-30e2-45af-8a1c-d3056402343b-kube-api-access-5tnqt" (OuterVolumeSpecName: "kube-api-access-5tnqt") pod "e457e88c-30e2-45af-8a1c-d3056402343b" (UID: "e457e88c-30e2-45af-8a1c-d3056402343b"). InnerVolumeSpecName "kube-api-access-5tnqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.424479 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e457e88c-30e2-45af-8a1c-d3056402343b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e457e88c-30e2-45af-8a1c-d3056402343b" (UID: "e457e88c-30e2-45af-8a1c-d3056402343b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.477975 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tnqt\" (UniqueName: \"kubernetes.io/projected/e457e88c-30e2-45af-8a1c-d3056402343b-kube-api-access-5tnqt\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.478017 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e457e88c-30e2-45af-8a1c-d3056402343b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.558954 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e457e88c-30e2-45af-8a1c-d3056402343b-config-data" (OuterVolumeSpecName: "config-data") pod "e457e88c-30e2-45af-8a1c-d3056402343b" (UID: "e457e88c-30e2-45af-8a1c-d3056402343b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.580062 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e457e88c-30e2-45af-8a1c-d3056402343b-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.623424 4693 generic.go:334] "Generic (PLEG): container finished" podID="8103b038-eb16-4430-b466-0a02981b4e4a" containerID="32f5dc79efa9a41ff3d318ba5d14411ba8eec42e061818c56c64beb2efbf3af5" exitCode=0 Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.623495 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" event={"ID":"8103b038-eb16-4430-b466-0a02981b4e4a","Type":"ContainerDied","Data":"32f5dc79efa9a41ff3d318ba5d14411ba8eec42e061818c56c64beb2efbf3af5"} Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.625857 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-h5zgl" event={"ID":"e457e88c-30e2-45af-8a1c-d3056402343b","Type":"ContainerDied","Data":"8c42ea8ebdd5883b1de6cb0ff3452a05924b273ec45f95d69796796b07ddb1ff"} Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.625940 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c42ea8ebdd5883b1de6cb0ff3452a05924b273ec45f95d69796796b07ddb1ff" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.626035 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-h5zgl" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.777864 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.785973 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-config\") pod \"8103b038-eb16-4430-b466-0a02981b4e4a\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.786020 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-openstack-edpm-ipam\") pod \"8103b038-eb16-4430-b466-0a02981b4e4a\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.786067 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-ovsdbserver-sb\") pod \"8103b038-eb16-4430-b466-0a02981b4e4a\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.786127 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-dns-swift-storage-0\") pod \"8103b038-eb16-4430-b466-0a02981b4e4a\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.786175 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-dns-svc\") pod \"8103b038-eb16-4430-b466-0a02981b4e4a\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " Dec 12 16:17:41 crc 
kubenswrapper[4693]: I1212 16:17:41.786283 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b75k\" (UniqueName: \"kubernetes.io/projected/8103b038-eb16-4430-b466-0a02981b4e4a-kube-api-access-6b75k\") pod \"8103b038-eb16-4430-b466-0a02981b4e4a\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.786305 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-ovsdbserver-nb\") pod \"8103b038-eb16-4430-b466-0a02981b4e4a\" (UID: \"8103b038-eb16-4430-b466-0a02981b4e4a\") " Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.810613 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8103b038-eb16-4430-b466-0a02981b4e4a-kube-api-access-6b75k" (OuterVolumeSpecName: "kube-api-access-6b75k") pod "8103b038-eb16-4430-b466-0a02981b4e4a" (UID: "8103b038-eb16-4430-b466-0a02981b4e4a"). InnerVolumeSpecName "kube-api-access-6b75k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.880651 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-config" (OuterVolumeSpecName: "config") pod "8103b038-eb16-4430-b466-0a02981b4e4a" (UID: "8103b038-eb16-4430-b466-0a02981b4e4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.881288 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8103b038-eb16-4430-b466-0a02981b4e4a" (UID: "8103b038-eb16-4430-b466-0a02981b4e4a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.883097 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8103b038-eb16-4430-b466-0a02981b4e4a" (UID: "8103b038-eb16-4430-b466-0a02981b4e4a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.883787 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8103b038-eb16-4430-b466-0a02981b4e4a" (UID: "8103b038-eb16-4430-b466-0a02981b4e4a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.883831 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8103b038-eb16-4430-b466-0a02981b4e4a" (UID: "8103b038-eb16-4430-b466-0a02981b4e4a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.889320 4693 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.889354 4693 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.889367 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b75k\" (UniqueName: \"kubernetes.io/projected/8103b038-eb16-4430-b466-0a02981b4e4a-kube-api-access-6b75k\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.889382 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.889393 4693 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.889404 4693 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.916389 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "8103b038-eb16-4430-b466-0a02981b4e4a" (UID: "8103b038-eb16-4430-b466-0a02981b4e4a"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:17:41 crc kubenswrapper[4693]: I1212 16:17:41.991344 4693 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8103b038-eb16-4430-b466-0a02981b4e4a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.365794 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.641118 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" event={"ID":"8103b038-eb16-4430-b466-0a02981b4e4a","Type":"ContainerDied","Data":"00e273df99b700c511084ae6bc8542ddbda594f006141c8d21f5a1149b8e8921"} Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.641171 4693 scope.go:117] "RemoveContainer" containerID="32f5dc79efa9a41ff3d318ba5d14411ba8eec42e061818c56c64beb2efbf3af5" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.641255 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-w2qkq" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.715175 4693 scope.go:117] "RemoveContainer" containerID="fe600ecae669a66aae1fd1d83c1cf5b55549d85a6c5d11f28304062f472364a0" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.729395 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6f88647487-x6nx4"] Dec 12 16:17:42 crc kubenswrapper[4693]: E1212 16:17:42.729990 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="199159e4-5fda-4c35-a5f3-c1d84e68b9bc" containerName="init" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.730024 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="199159e4-5fda-4c35-a5f3-c1d84e68b9bc" containerName="init" Dec 12 16:17:42 crc kubenswrapper[4693]: E1212 16:17:42.730043 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="199159e4-5fda-4c35-a5f3-c1d84e68b9bc" containerName="dnsmasq-dns" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.730049 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="199159e4-5fda-4c35-a5f3-c1d84e68b9bc" containerName="dnsmasq-dns" Dec 12 16:17:42 crc kubenswrapper[4693]: E1212 16:17:42.730080 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8103b038-eb16-4430-b466-0a02981b4e4a" containerName="dnsmasq-dns" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.730086 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="8103b038-eb16-4430-b466-0a02981b4e4a" containerName="dnsmasq-dns" Dec 12 16:17:42 crc kubenswrapper[4693]: E1212 16:17:42.730103 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8103b038-eb16-4430-b466-0a02981b4e4a" containerName="init" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.730109 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="8103b038-eb16-4430-b466-0a02981b4e4a" containerName="init" Dec 12 16:17:42 crc kubenswrapper[4693]: E1212 16:17:42.730133 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e457e88c-30e2-45af-8a1c-d3056402343b" containerName="heat-db-sync" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.730139 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="e457e88c-30e2-45af-8a1c-d3056402343b" containerName="heat-db-sync" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.730385 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="e457e88c-30e2-45af-8a1c-d3056402343b" containerName="heat-db-sync" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.730400 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="8103b038-eb16-4430-b466-0a02981b4e4a" containerName="dnsmasq-dns" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.730409 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="199159e4-5fda-4c35-a5f3-c1d84e68b9bc" containerName="dnsmasq-dns" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.731282 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6f88647487-x6nx4" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.744000 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-w2qkq"] Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.760581 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-w2qkq"] Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.773187 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6f88647487-x6nx4"] Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.807963 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358ae82-e228-4d1e-8d68-8ff49a9bbdc1-combined-ca-bundle\") pod \"heat-engine-6f88647487-x6nx4\" (UID: \"9358ae82-e228-4d1e-8d68-8ff49a9bbdc1\") " pod="openstack/heat-engine-6f88647487-x6nx4" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.808040 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9358ae82-e228-4d1e-8d68-8ff49a9bbdc1-config-data-custom\") pod \"heat-engine-6f88647487-x6nx4\" (UID: \"9358ae82-e228-4d1e-8d68-8ff49a9bbdc1\") " pod="openstack/heat-engine-6f88647487-x6nx4" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.808169 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmlwf\" (UniqueName: \"kubernetes.io/projected/9358ae82-e228-4d1e-8d68-8ff49a9bbdc1-kube-api-access-qmlwf\") pod \"heat-engine-6f88647487-x6nx4\" (UID: \"9358ae82-e228-4d1e-8d68-8ff49a9bbdc1\") " pod="openstack/heat-engine-6f88647487-x6nx4" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.808236 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9358ae82-e228-4d1e-8d68-8ff49a9bbdc1-config-data\") pod \"heat-engine-6f88647487-x6nx4\" (UID: \"9358ae82-e228-4d1e-8d68-8ff49a9bbdc1\") " pod="openstack/heat-engine-6f88647487-x6nx4" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.848295 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-666b75576f-n2mqg"] Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.850395 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.877149 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-666b75576f-n2mqg"] Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.911021 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2530fdec-8001-43a2-a0dd-2735ef97ef57-config-data-custom\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.911184 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358ae82-e228-4d1e-8d68-8ff49a9bbdc1-combined-ca-bundle\") pod \"heat-engine-6f88647487-x6nx4\" (UID: \"9358ae82-e228-4d1e-8d68-8ff49a9bbdc1\") " pod="openstack/heat-engine-6f88647487-x6nx4" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.911382 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9358ae82-e228-4d1e-8d68-8ff49a9bbdc1-config-data-custom\") pod \"heat-engine-6f88647487-x6nx4\" (UID: \"9358ae82-e228-4d1e-8d68-8ff49a9bbdc1\") " pod="openstack/heat-engine-6f88647487-x6nx4" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.911470 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2530fdec-8001-43a2-a0dd-2735ef97ef57-combined-ca-bundle\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.911505 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2530fdec-8001-43a2-a0dd-2735ef97ef57-config-data\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.911523 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q82nr\" (UniqueName: \"kubernetes.io/projected/2530fdec-8001-43a2-a0dd-2735ef97ef57-kube-api-access-q82nr\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.911648 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2530fdec-8001-43a2-a0dd-2735ef97ef57-internal-tls-certs\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.911734 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmlwf\" (UniqueName: \"kubernetes.io/projected/9358ae82-e228-4d1e-8d68-8ff49a9bbdc1-kube-api-access-qmlwf\") pod \"heat-engine-6f88647487-x6nx4\" (UID: \"9358ae82-e228-4d1e-8d68-8ff49a9bbdc1\") " pod="openstack/heat-engine-6f88647487-x6nx4" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.911879 4693 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9358ae82-e228-4d1e-8d68-8ff49a9bbdc1-config-data\") pod \"heat-engine-6f88647487-x6nx4\" (UID: \"9358ae82-e228-4d1e-8d68-8ff49a9bbdc1\") " pod="openstack/heat-engine-6f88647487-x6nx4" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.911913 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2530fdec-8001-43a2-a0dd-2735ef97ef57-public-tls-certs\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.917765 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9358ae82-e228-4d1e-8d68-8ff49a9bbdc1-config-data-custom\") pod \"heat-engine-6f88647487-x6nx4\" (UID: \"9358ae82-e228-4d1e-8d68-8ff49a9bbdc1\") " pod="openstack/heat-engine-6f88647487-x6nx4" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.920931 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5fb5b97f75-jgjtw"] Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.922630 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.923887 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358ae82-e228-4d1e-8d68-8ff49a9bbdc1-combined-ca-bundle\") pod \"heat-engine-6f88647487-x6nx4\" (UID: \"9358ae82-e228-4d1e-8d68-8ff49a9bbdc1\") " pod="openstack/heat-engine-6f88647487-x6nx4" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.924720 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9358ae82-e228-4d1e-8d68-8ff49a9bbdc1-config-data\") pod \"heat-engine-6f88647487-x6nx4\" (UID: \"9358ae82-e228-4d1e-8d68-8ff49a9bbdc1\") " pod="openstack/heat-engine-6f88647487-x6nx4" Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.944302 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5fb5b97f75-jgjtw"] Dec 12 16:17:42 crc kubenswrapper[4693]: I1212 16:17:42.944913 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmlwf\" (UniqueName: \"kubernetes.io/projected/9358ae82-e228-4d1e-8d68-8ff49a9bbdc1-kube-api-access-qmlwf\") pod \"heat-engine-6f88647487-x6nx4\" (UID: \"9358ae82-e228-4d1e-8d68-8ff49a9bbdc1\") " pod="openstack/heat-engine-6f88647487-x6nx4" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.013598 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-config-data-custom\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.013679 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2530fdec-8001-43a2-a0dd-2735ef97ef57-public-tls-certs\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:43 crc 
kubenswrapper[4693]: I1212 16:17:43.013726 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbgs6\" (UniqueName: \"kubernetes.io/projected/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-kube-api-access-cbgs6\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.013807 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2530fdec-8001-43a2-a0dd-2735ef97ef57-config-data-custom\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.013827 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-internal-tls-certs\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.013876 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-combined-ca-bundle\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.013986 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2530fdec-8001-43a2-a0dd-2735ef97ef57-combined-ca-bundle\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.014039 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2530fdec-8001-43a2-a0dd-2735ef97ef57-config-data\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.014072 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q82nr\" (UniqueName: \"kubernetes.io/projected/2530fdec-8001-43a2-a0dd-2735ef97ef57-kube-api-access-q82nr\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.014130 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-public-tls-certs\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.014170 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2530fdec-8001-43a2-a0dd-2735ef97ef57-internal-tls-certs\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " 
pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.014203 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-config-data\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.017816 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2530fdec-8001-43a2-a0dd-2735ef97ef57-internal-tls-certs\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.017995 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2530fdec-8001-43a2-a0dd-2735ef97ef57-combined-ca-bundle\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.018041 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2530fdec-8001-43a2-a0dd-2735ef97ef57-config-data-custom\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.018097 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2530fdec-8001-43a2-a0dd-2735ef97ef57-public-tls-certs\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.018235 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2530fdec-8001-43a2-a0dd-2735ef97ef57-config-data\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.029681 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q82nr\" (UniqueName: \"kubernetes.io/projected/2530fdec-8001-43a2-a0dd-2735ef97ef57-kube-api-access-q82nr\") pod \"heat-api-666b75576f-n2mqg\" (UID: \"2530fdec-8001-43a2-a0dd-2735ef97ef57\") " pod="openstack/heat-api-666b75576f-n2mqg" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.115972 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-combined-ca-bundle\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.116108 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-public-tls-certs\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 
16:17:43.116140 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-config-data\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.116200 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-config-data-custom\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.116262 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbgs6\" (UniqueName: \"kubernetes.io/projected/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-kube-api-access-cbgs6\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.116329 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-internal-tls-certs\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.120935 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-internal-tls-certs\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.121263 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-combined-ca-bundle\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.121901 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-config-data-custom\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.129935 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-public-tls-certs\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.130394 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-config-data\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.136886 4693 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cbgs6\" (UniqueName: \"kubernetes.io/projected/1a4a5a89-abec-4d7b-9df5-ddcc4643fca0-kube-api-access-cbgs6\") pod \"heat-cfnapi-5fb5b97f75-jgjtw\" (UID: \"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0\") " pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw"
Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.177900 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6f88647487-x6nx4"
Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.198230 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-666b75576f-n2mqg"
Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.305182 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw"
Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.379233 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8103b038-eb16-4430-b466-0a02981b4e4a" path="/var/lib/kubelet/pods/8103b038-eb16-4430-b466-0a02981b4e4a/volumes"
Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.654752 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40","Type":"ContainerStarted","Data":"b715f8597df5123d0b0521b7ea55d22bfbb06bec445d205213530d7ca9177bce"}
Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.686038 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.64810508 podStartE2EDuration="46.686015049s" podCreationTimestamp="2025-12-12 16:16:57 +0000 UTC" firstStartedPulling="2025-12-12 16:16:58.517592118 +0000 UTC m=+1845.686231719" lastFinishedPulling="2025-12-12 16:17:42.555502087 +0000 UTC m=+1889.724141688" observedRunningTime="2025-12-12 16:17:43.675539302 +0000 UTC m=+1890.844178903" watchObservedRunningTime="2025-12-12 16:17:43.686015049 +0000 UTC m=+1890.854654650"
Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.762198 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-666b75576f-n2mqg"]
Dec 12 16:17:43 crc kubenswrapper[4693]: I1212 16:17:43.854768 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6f88647487-x6nx4"]
Dec 12 16:17:44 crc kubenswrapper[4693]: I1212 16:17:44.009018 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5fb5b97f75-jgjtw"]
Dec 12 16:17:44 crc kubenswrapper[4693]: W1212 16:17:44.013951 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a4a5a89_abec_4d7b_9df5_ddcc4643fca0.slice/crio-d7c6bf251a69f1e76a9ed559807fc57b279a26e6a6a63ec0553eded2b0d777cd WatchSource:0}: Error finding container d7c6bf251a69f1e76a9ed559807fc57b279a26e6a6a63ec0553eded2b0d777cd: Status 404 returned error can't find the container with id d7c6bf251a69f1e76a9ed559807fc57b279a26e6a6a63ec0553eded2b0d777cd
Dec 12 16:17:44 crc kubenswrapper[4693]: I1212 16:17:44.678159 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" event={"ID":"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0","Type":"ContainerStarted","Data":"d7c6bf251a69f1e76a9ed559807fc57b279a26e6a6a63ec0553eded2b0d777cd"}
Dec 12 16:17:44 crc kubenswrapper[4693]: I1212 16:17:44.680370 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f88647487-x6nx4" event={"ID":"9358ae82-e228-4d1e-8d68-8ff49a9bbdc1","Type":"ContainerStarted","Data":"4ac17d591fb7c3571fac3d847c7b46aa51cd8175ec717dc9028e46e56cefed0b"}
Dec 12 16:17:44 crc kubenswrapper[4693]: I1212 16:17:44.680411 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f88647487-x6nx4" event={"ID":"9358ae82-e228-4d1e-8d68-8ff49a9bbdc1","Type":"ContainerStarted","Data":"3b83c890c6f238a9f03c6ff0ec907171501a3f5e1070d22758dc02f46be1b182"}
Dec 12 16:17:44 crc kubenswrapper[4693]: I1212 16:17:44.682613 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6f88647487-x6nx4"
Dec 12 16:17:44 crc kubenswrapper[4693]: I1212 16:17:44.684483 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-666b75576f-n2mqg" event={"ID":"2530fdec-8001-43a2-a0dd-2735ef97ef57","Type":"ContainerStarted","Data":"9ffc7fd4735f7183070e291cc1cf8719816fe30ee750f91b8f439d5afc5a4a56"}
Dec 12 16:17:44 crc kubenswrapper[4693]: I1212 16:17:44.697558 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6f88647487-x6nx4" podStartSLOduration=2.697544248 podStartE2EDuration="2.697544248s" podCreationTimestamp="2025-12-12 16:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:44.694497764 +0000 UTC m=+1891.863137375" watchObservedRunningTime="2025-12-12 16:17:44.697544248 +0000 UTC m=+1891.866183849"
Dec 12 16:17:46 crc kubenswrapper[4693]: I1212 16:17:46.357292 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67"
Dec 12 16:17:46 crc kubenswrapper[4693]: E1212 16:17:46.359165 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d"
Dec 12 16:17:46 crc kubenswrapper[4693]: I1212 16:17:46.710179 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-666b75576f-n2mqg" event={"ID":"2530fdec-8001-43a2-a0dd-2735ef97ef57","Type":"ContainerStarted","Data":"bc3834a61236752f34f945188144c3c72cfbcc653f18426907120981f6158688"}
Dec 12 16:17:46 crc kubenswrapper[4693]: I1212 16:17:46.712284 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" event={"ID":"1a4a5a89-abec-4d7b-9df5-ddcc4643fca0","Type":"ContainerStarted","Data":"b90dcad0e0d3091928ef476ec5f72874daad93293b1a571a229bb6a30084d3e7"}
Dec 12 16:17:46 crc kubenswrapper[4693]: I1212 16:17:46.738129 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-666b75576f-n2mqg" podStartSLOduration=2.737566601 podStartE2EDuration="4.738109253s" podCreationTimestamp="2025-12-12 16:17:42 +0000 UTC" firstStartedPulling="2025-12-12 16:17:43.752173438 +0000 UTC m=+1890.920813039" lastFinishedPulling="2025-12-12 16:17:45.75271609 +0000 UTC m=+1892.921355691" observedRunningTime="2025-12-12 16:17:46.728819689 +0000 UTC m=+1893.897459290" watchObservedRunningTime="2025-12-12 16:17:46.738109253 +0000 UTC m=+1893.906748854"
Dec 12 16:17:46 crc kubenswrapper[4693]: I1212 16:17:46.750421 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" podStartSLOduration=3.012865969 podStartE2EDuration="4.750401189s" podCreationTimestamp="2025-12-12 16:17:42 +0000 UTC" firstStartedPulling="2025-12-12 16:17:44.017247326 +0000 UTC m=+1891.185886927" lastFinishedPulling="2025-12-12 16:17:45.754782556 +0000 UTC m=+1892.923422147" observedRunningTime="2025-12-12 16:17:46.747406147 +0000 UTC m=+1893.916045748" watchObservedRunningTime="2025-12-12 16:17:46.750401189 +0000 UTC m=+1893.919040790"
Dec 12 16:17:47 crc kubenswrapper[4693]: I1212 16:17:47.739235 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw"
Dec 12 16:17:47 crc kubenswrapper[4693]: I1212 16:17:47.739790 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-666b75576f-n2mqg"
Dec 12 16:17:54 crc kubenswrapper[4693]: I1212 16:17:54.946295 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw"
Dec 12 16:17:54 crc kubenswrapper[4693]: I1212 16:17:54.960062 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-666b75576f-n2mqg"
Dec 12 16:17:55 crc kubenswrapper[4693]: I1212 16:17:55.047539 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5f78f47478-pkflg"]
Dec 12 16:17:55 crc kubenswrapper[4693]: I1212 16:17:55.047787 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-5f78f47478-pkflg" podUID="e03ab816-746e-449a-8b01-e627b71362d3" containerName="heat-cfnapi" containerID="cri-o://405cf1f6ed52034b6ce730a74929b3c77d7bd2886c293cb0556bb9380199d768" gracePeriod=60
Dec 12 16:17:55 crc kubenswrapper[4693]: I1212 16:17:55.077319 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-798ccfcf74-j5zqf"]
Dec 12 16:17:55 crc kubenswrapper[4693]: I1212 16:17:55.077552 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-798ccfcf74-j5zqf" podUID="d39e7363-35e4-4586-a52a-ad6b28e98eb7" containerName="heat-api" containerID="cri-o://9d02bd23400e0104e952e43ebcbc1423381106072949b44d581b5a49ac891e7f" gracePeriod=60
Dec 12 16:17:55 crc kubenswrapper[4693]: I1212 16:17:55.779113 4693 scope.go:117] "RemoveContainer" containerID="2e81ed81c502e228d7feaca2bff473b26d91b3fac4a8f0f9da9d0a0342e7f629"
Dec 12 16:17:55 crc kubenswrapper[4693]: I1212 16:17:55.817704 4693 scope.go:117] "RemoveContainer" containerID="70974cf4b283d0d38da16dc0229eb470987c19b127323745e0297d03d487d4d1"
Dec 12 16:17:55 crc kubenswrapper[4693]: I1212 16:17:55.913698 4693 scope.go:117] "RemoveContainer" containerID="84a1d0b1782b934a006b4d49dd57c67ed374d764caeee8eb5d84d8a09b82a882"
Dec 12 16:17:58 crc kubenswrapper[4693]: I1212 16:17:58.513258 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-798ccfcf74-j5zqf" podUID="d39e7363-35e4-4586-a52a-ad6b28e98eb7" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.222:8004/healthcheck\": read tcp 10.217.0.2:55580->10.217.0.222:8004: read: connection reset by peer"
Dec 12 16:17:58 crc kubenswrapper[4693]: I1212 16:17:58.531717 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-5f78f47478-pkflg" podUID="e03ab816-746e-449a-8b01-e627b71362d3" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.221:8000/healthcheck\": read tcp 10.217.0.2:36848->10.217.0.221:8000: read: connection reset by peer"
Dec 12 16:17:58 crc kubenswrapper[4693]: I1212 16:17:58.902907 4693 generic.go:334] "Generic (PLEG): container finished" podID="d39e7363-35e4-4586-a52a-ad6b28e98eb7" containerID="9d02bd23400e0104e952e43ebcbc1423381106072949b44d581b5a49ac891e7f" exitCode=0
Dec 12 16:17:58 crc kubenswrapper[4693]: I1212 16:17:58.903017 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-798ccfcf74-j5zqf" event={"ID":"d39e7363-35e4-4586-a52a-ad6b28e98eb7","Type":"ContainerDied","Data":"9d02bd23400e0104e952e43ebcbc1423381106072949b44d581b5a49ac891e7f"}
Dec 12 16:17:58 crc kubenswrapper[4693]: I1212 16:17:58.911212 4693 generic.go:334] "Generic (PLEG): container finished" podID="e03ab816-746e-449a-8b01-e627b71362d3" containerID="405cf1f6ed52034b6ce730a74929b3c77d7bd2886c293cb0556bb9380199d768" exitCode=0
Dec 12 16:17:58 crc kubenswrapper[4693]: I1212 16:17:58.911249 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f78f47478-pkflg" event={"ID":"e03ab816-746e-449a-8b01-e627b71362d3","Type":"ContainerDied","Data":"405cf1f6ed52034b6ce730a74929b3c77d7bd2886c293cb0556bb9380199d768"}
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.123612 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5f78f47478-pkflg"
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.228383 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6cm6\" (UniqueName: \"kubernetes.io/projected/e03ab816-746e-449a-8b01-e627b71362d3-kube-api-access-h6cm6\") pod \"e03ab816-746e-449a-8b01-e627b71362d3\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") "
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.228437 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-config-data-custom\") pod \"e03ab816-746e-449a-8b01-e627b71362d3\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") "
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.228456 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-combined-ca-bundle\") pod \"e03ab816-746e-449a-8b01-e627b71362d3\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") "
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.228515 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-config-data\") pod \"e03ab816-746e-449a-8b01-e627b71362d3\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") "
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.228607 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-internal-tls-certs\") pod \"e03ab816-746e-449a-8b01-e627b71362d3\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") "
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.228729 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-public-tls-certs\") pod \"e03ab816-746e-449a-8b01-e627b71362d3\" (UID: \"e03ab816-746e-449a-8b01-e627b71362d3\") "
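The two "Probe failed" entries above show the kubelet's readiness prober hitting an HTTPS /healthcheck endpoint on heat-api (10.217.0.222:8004) and heat-cfnapi (10.217.0.221:8000) while both containers were shutting down under a 60-second grace period (gracePeriod=60 in the "Killing container" entries). A minimal sketch of the pod-spec shape these entries imply; the scheme, path, port, and grace period come from the log, while the probe timings are illustrative assumptions only:

    spec:
      terminationGracePeriodSeconds: 60   # matches gracePeriod=60 logged above
      containers:
        - name: heat-api
          readinessProbe:
            httpGet:
              scheme: HTTPS               # probe output shows an https:// URL
              path: /healthcheck          # path taken from the probe output
              port: 8004                  # heat-api port from the probe output
            periodSeconds: 5              # assumption; not recorded in this log
            failureThreshold: 3           # assumption; not recorded in this log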
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.237025 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e03ab816-746e-449a-8b01-e627b71362d3" (UID: "e03ab816-746e-449a-8b01-e627b71362d3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.249648 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e03ab816-746e-449a-8b01-e627b71362d3-kube-api-access-h6cm6" (OuterVolumeSpecName: "kube-api-access-h6cm6") pod "e03ab816-746e-449a-8b01-e627b71362d3" (UID: "e03ab816-746e-449a-8b01-e627b71362d3"). InnerVolumeSpecName "kube-api-access-h6cm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.306290 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e03ab816-746e-449a-8b01-e627b71362d3" (UID: "e03ab816-746e-449a-8b01-e627b71362d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.317210 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e03ab816-746e-449a-8b01-e627b71362d3" (UID: "e03ab816-746e-449a-8b01-e627b71362d3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.332204 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6cm6\" (UniqueName: \"kubernetes.io/projected/e03ab816-746e-449a-8b01-e627b71362d3-kube-api-access-h6cm6\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.332241 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.332250 4693 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.332258 4693 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-public-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.334436 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-config-data" (OuterVolumeSpecName: "config-data") pod "e03ab816-746e-449a-8b01-e627b71362d3" (UID: "e03ab816-746e-449a-8b01-e627b71362d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.357704 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67"
Dec 12 16:17:59 crc kubenswrapper[4693]: E1212 16:17:59.358640 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d"
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.369555 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-798ccfcf74-j5zqf"
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.434109 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-combined-ca-bundle\") pod \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") "
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.434221 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddnc4\" (UniqueName: \"kubernetes.io/projected/d39e7363-35e4-4586-a52a-ad6b28e98eb7-kube-api-access-ddnc4\") pod \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") "
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.434322 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-public-tls-certs\") pod \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") "
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.434522 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-internal-tls-certs\") pod \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") "
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.434607 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-config-data-custom\") pod \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") "
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.434645 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-config-data\") pod \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\" (UID: \"d39e7363-35e4-4586-a52a-ad6b28e98eb7\") "
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.435950 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-config-data\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.442916 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d39e7363-35e4-4586-a52a-ad6b28e98eb7" (UID: "d39e7363-35e4-4586-a52a-ad6b28e98eb7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.443236 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d39e7363-35e4-4586-a52a-ad6b28e98eb7-kube-api-access-ddnc4" (OuterVolumeSpecName: "kube-api-access-ddnc4") pod "d39e7363-35e4-4586-a52a-ad6b28e98eb7" (UID: "d39e7363-35e4-4586-a52a-ad6b28e98eb7"). InnerVolumeSpecName "kube-api-access-ddnc4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.454086 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e03ab816-746e-449a-8b01-e627b71362d3" (UID: "e03ab816-746e-449a-8b01-e627b71362d3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.499971 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-config-data" (OuterVolumeSpecName: "config-data") pod "d39e7363-35e4-4586-a52a-ad6b28e98eb7" (UID: "d39e7363-35e4-4586-a52a-ad6b28e98eb7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.511866 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d39e7363-35e4-4586-a52a-ad6b28e98eb7" (UID: "d39e7363-35e4-4586-a52a-ad6b28e98eb7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.529708 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d39e7363-35e4-4586-a52a-ad6b28e98eb7" (UID: "d39e7363-35e4-4586-a52a-ad6b28e98eb7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.541143 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.541198 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddnc4\" (UniqueName: \"kubernetes.io/projected/d39e7363-35e4-4586-a52a-ad6b28e98eb7-kube-api-access-ddnc4\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.541211 4693 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e03ab816-746e-449a-8b01-e627b71362d3-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.541222 4693 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-public-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.541231 4693 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.541240 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-config-data\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.578561 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d39e7363-35e4-4586-a52a-ad6b28e98eb7" (UID: "d39e7363-35e4-4586-a52a-ad6b28e98eb7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.643866 4693 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d39e7363-35e4-4586-a52a-ad6b28e98eb7-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.924042 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-798ccfcf74-j5zqf" event={"ID":"d39e7363-35e4-4586-a52a-ad6b28e98eb7","Type":"ContainerDied","Data":"5735a4444d47974dd960a63d8bcf9b121c0c9829cea030232bda49278c4754e9"}
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.924096 4693 scope.go:117] "RemoveContainer" containerID="9d02bd23400e0104e952e43ebcbc1423381106072949b44d581b5a49ac891e7f"
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.924055 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-798ccfcf74-j5zqf"
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.926514 4693 generic.go:334] "Generic (PLEG): container finished" podID="41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed" containerID="a333cdbb65eb3f265d5a9407919166d465208e4c44618b22e78bb3a22a585fa3" exitCode=0
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.926610 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed","Type":"ContainerDied","Data":"a333cdbb65eb3f265d5a9407919166d465208e4c44618b22e78bb3a22a585fa3"}
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.929609 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f78f47478-pkflg" event={"ID":"e03ab816-746e-449a-8b01-e627b71362d3","Type":"ContainerDied","Data":"8ec38c9e200425e9af4cdcabd4d091f5980a1964b375bc1c9d787d3de0c866e1"}
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.929625 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5f78f47478-pkflg"
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.932333 4693 generic.go:334] "Generic (PLEG): container finished" podID="5829509e-ca60-403e-8444-81ebb63d2df5" containerID="172640f12dce0872f7626ac0632d94cd7729895b12755eb2ffc276c61082df07" exitCode=0
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.932379 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5829509e-ca60-403e-8444-81ebb63d2df5","Type":"ContainerDied","Data":"172640f12dce0872f7626ac0632d94cd7729895b12755eb2ffc276c61082df07"}
Dec 12 16:17:59 crc kubenswrapper[4693]: I1212 16:17:59.948332 4693 scope.go:117] "RemoveContainer" containerID="405cf1f6ed52034b6ce730a74929b3c77d7bd2886c293cb0556bb9380199d768"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.198155 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-798ccfcf74-j5zqf"]
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.214009 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-798ccfcf74-j5zqf"]
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.228227 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5f78f47478-pkflg"]
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.239430 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5f78f47478-pkflg"]
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.724557 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"]
Dec 12 16:18:00 crc kubenswrapper[4693]: E1212 16:18:00.726984 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03ab816-746e-449a-8b01-e627b71362d3" containerName="heat-cfnapi"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.727104 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="e03ab816-746e-449a-8b01-e627b71362d3" containerName="heat-cfnapi"
Dec 12 16:18:00 crc kubenswrapper[4693]: E1212 16:18:00.727245 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39e7363-35e4-4586-a52a-ad6b28e98eb7" containerName="heat-api"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.727368 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39e7363-35e4-4586-a52a-ad6b28e98eb7" containerName="heat-api"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.727884 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="e03ab816-746e-449a-8b01-e627b71362d3" containerName="heat-cfnapi"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.727993 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39e7363-35e4-4586-a52a-ad6b28e98eb7" containerName="heat-api"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.731979 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.735572 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.735797 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.737769 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.738836 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"]
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.756161 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.901926 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc\" (UID: \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.902099 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgb4w\" (UniqueName: \"kubernetes.io/projected/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-kube-api-access-mgb4w\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc\" (UID: \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.902172 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc\" (UID: \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.902239 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc\" (UID: \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.953997 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"41df1bb1-eee7-4eec-bd6e-1cc66d20d8ed","Type":"ContainerStarted","Data":"84bcc4a9a4e38d364e62c82a02871836a24e3ba4c91c7d1d7089a3e927d9af1e"}
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.954884 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.957382 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5829509e-ca60-403e-8444-81ebb63d2df5","Type":"ContainerStarted","Data":"6adbd7adbee3a749219f786adeb83d1a6f9f42be744cf2f0d38b410664accf05"}
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.957651 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Dec 12 16:18:00 crc kubenswrapper[4693]: I1212 16:18:00.979635 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=45.979608042 podStartE2EDuration="45.979608042s" podCreationTimestamp="2025-12-12 16:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:18:00.975079868 +0000 UTC m=+1908.143719469" watchObservedRunningTime="2025-12-12 16:18:00.979608042 +0000 UTC m=+1908.148247643"
Dec 12 16:18:01 crc kubenswrapper[4693]: I1212 16:18:01.006016 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc\" (UID: \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"
Dec 12 16:18:01 crc kubenswrapper[4693]: I1212 16:18:01.006145 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgb4w\" (UniqueName: \"kubernetes.io/projected/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-kube-api-access-mgb4w\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc\" (UID: \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"
Dec 12 16:18:01 crc kubenswrapper[4693]: I1212 16:18:01.006203 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc\" (UID: \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"
Dec 12 16:18:01 crc kubenswrapper[4693]: I1212 16:18:01.006250 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc\" (UID: \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"
Dec 12 16:18:01 crc kubenswrapper[4693]: I1212 16:18:01.012364 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc\" (UID: \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"
Dec 12 16:18:01 crc kubenswrapper[4693]: I1212 16:18:01.012574 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc\" (UID: \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"
Dec 12 16:18:01 crc kubenswrapper[4693]: I1212 16:18:01.015763 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=46.01574058 podStartE2EDuration="46.01574058s" podCreationTimestamp="2025-12-12 16:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:18:00.995709022 +0000 UTC m=+1908.164348623" watchObservedRunningTime="2025-12-12 16:18:01.01574058 +0000 UTC m=+1908.184380181"
Dec 12 16:18:01 crc kubenswrapper[4693]: I1212 16:18:01.019410 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc\" (UID: \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"
Dec 12 16:18:01 crc kubenswrapper[4693]: I1212 16:18:01.026982 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgb4w\" (UniqueName: \"kubernetes.io/projected/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-kube-api-access-mgb4w\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc\" (UID: \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"
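The four MountVolume.SetUp entries above line up with the VolumeMounts that kubelet later prints in the container-spec dump at 16:18:17 (the ErrImagePull error further down). Restated from that dump as pod-spec YAML; the mount paths are taken from the dump, while the backing Secret names are inferred from the "Caches populated" entries above and therefore hedged:

    volumeMounts:
      - name: repo-setup-combined-ca-bundle
        mountPath: /var/lib/openstack/cacerts/repo-setup
      - name: ssh-key
        mountPath: /runner/env/ssh_key
        subPath: ssh_key
      - name: inventory
        mountPath: /runner/inventory/hosts
        subPath: inventory
      - name: kube-api-access-mgb4w
        mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        readOnly: true
    # Likely volume sources (inferred, not stated outright in this log):
    #   ssh-key    <- Secret dataplane-ansible-ssh-private-key-secret
    #   inventory  <- Secret dataplanenodeset-openstack-edpm-ipam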
Dec 12 16:18:01 crc kubenswrapper[4693]: I1212 16:18:01.073788 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"
Dec 12 16:18:01 crc kubenswrapper[4693]: I1212 16:18:01.373063 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d39e7363-35e4-4586-a52a-ad6b28e98eb7" path="/var/lib/kubelet/pods/d39e7363-35e4-4586-a52a-ad6b28e98eb7/volumes"
Dec 12 16:18:01 crc kubenswrapper[4693]: I1212 16:18:01.374739 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e03ab816-746e-449a-8b01-e627b71362d3" path="/var/lib/kubelet/pods/e03ab816-746e-449a-8b01-e627b71362d3/volumes"
Dec 12 16:18:01 crc kubenswrapper[4693]: I1212 16:18:01.722547 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc"]
Dec 12 16:18:01 crc kubenswrapper[4693]: I1212 16:18:01.969647 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc" event={"ID":"e1e7981a-9706-4ed7-96b9-f2a1c65a6113","Type":"ContainerStarted","Data":"0fd66d406065d6297b4498d722175e547c82aee5f8d76d491d0d99062594b0e8"}
Dec 12 16:18:03 crc kubenswrapper[4693]: I1212 16:18:03.218466 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6f88647487-x6nx4"
Dec 12 16:18:03 crc kubenswrapper[4693]: I1212 16:18:03.293581 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-776c75b6d4-cbj4w"]
Dec 12 16:18:03 crc kubenswrapper[4693]: I1212 16:18:03.294119 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-776c75b6d4-cbj4w" podUID="77b98312-4447-4e00-b457-c724c0b623d3" containerName="heat-engine" containerID="cri-o://62edd93be233903199368d4a2eb49cd9c5da2bd76e170d84c9e0717ee947b966" gracePeriod=60
Dec 12 16:18:05 crc kubenswrapper[4693]: E1212 16:18:05.523096 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="62edd93be233903199368d4a2eb49cd9c5da2bd76e170d84c9e0717ee947b966" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Dec 12 16:18:05 crc kubenswrapper[4693]: E1212 16:18:05.525691 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="62edd93be233903199368d4a2eb49cd9c5da2bd76e170d84c9e0717ee947b966" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Dec 12 16:18:05 crc kubenswrapper[4693]: E1212 16:18:05.527584 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="62edd93be233903199368d4a2eb49cd9c5da2bd76e170d84c9e0717ee947b966" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Dec 12 16:18:05 crc kubenswrapper[4693]: E1212 16:18:05.527666 4693 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-776c75b6d4-cbj4w" podUID="77b98312-4447-4e00-b457-c724c0b623d3" containerName="heat-engine"
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.358649 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67"
Dec 12 16:18:12 crc kubenswrapper[4693]: E1212 16:18:12.359672 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d"
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.655480 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-s9j89"]
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.667808 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-s9j89"]
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.823474 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-p4r6w"]
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.830890 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-p4r6w"
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.853019 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-p4r6w"]
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.858536 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.876050 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-scripts\") pod \"aodh-db-sync-p4r6w\" (UID: \"6a78008e-b445-4968-b856-0ce60d97383f\") " pod="openstack/aodh-db-sync-p4r6w"
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.876294 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwl75\" (UniqueName: \"kubernetes.io/projected/6a78008e-b445-4968-b856-0ce60d97383f-kube-api-access-wwl75\") pod \"aodh-db-sync-p4r6w\" (UID: \"6a78008e-b445-4968-b856-0ce60d97383f\") " pod="openstack/aodh-db-sync-p4r6w"
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.876427 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-combined-ca-bundle\") pod \"aodh-db-sync-p4r6w\" (UID: \"6a78008e-b445-4968-b856-0ce60d97383f\") " pod="openstack/aodh-db-sync-p4r6w"
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.876453 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-config-data\") pod \"aodh-db-sync-p4r6w\" (UID: \"6a78008e-b445-4968-b856-0ce60d97383f\") " pod="openstack/aodh-db-sync-p4r6w"
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.980217 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-scripts\") pod \"aodh-db-sync-p4r6w\" (UID: \"6a78008e-b445-4968-b856-0ce60d97383f\") " pod="openstack/aodh-db-sync-p4r6w"
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.980330 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwl75\" (UniqueName: \"kubernetes.io/projected/6a78008e-b445-4968-b856-0ce60d97383f-kube-api-access-wwl75\") pod \"aodh-db-sync-p4r6w\" (UID: \"6a78008e-b445-4968-b856-0ce60d97383f\") " pod="openstack/aodh-db-sync-p4r6w"
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.980401 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-combined-ca-bundle\") pod \"aodh-db-sync-p4r6w\" (UID: \"6a78008e-b445-4968-b856-0ce60d97383f\") " pod="openstack/aodh-db-sync-p4r6w"
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.980425 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-config-data\") pod \"aodh-db-sync-p4r6w\" (UID: \"6a78008e-b445-4968-b856-0ce60d97383f\") " pod="openstack/aodh-db-sync-p4r6w"
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.986748 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-scripts\") pod \"aodh-db-sync-p4r6w\" (UID: \"6a78008e-b445-4968-b856-0ce60d97383f\") " pod="openstack/aodh-db-sync-p4r6w"
Dec 12 16:18:12 crc kubenswrapper[4693]: I1212 16:18:12.986764 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-combined-ca-bundle\") pod \"aodh-db-sync-p4r6w\" (UID: \"6a78008e-b445-4968-b856-0ce60d97383f\") " pod="openstack/aodh-db-sync-p4r6w"
Dec 12 16:18:13 crc kubenswrapper[4693]: I1212 16:18:13.015124 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-config-data\") pod \"aodh-db-sync-p4r6w\" (UID: \"6a78008e-b445-4968-b856-0ce60d97383f\") " pod="openstack/aodh-db-sync-p4r6w"
Dec 12 16:18:13 crc kubenswrapper[4693]: I1212 16:18:13.020256 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwl75\" (UniqueName: \"kubernetes.io/projected/6a78008e-b445-4968-b856-0ce60d97383f-kube-api-access-wwl75\") pod \"aodh-db-sync-p4r6w\" (UID: \"6a78008e-b445-4968-b856-0ce60d97383f\") " pod="openstack/aodh-db-sync-p4r6w"
Dec 12 16:18:13 crc kubenswrapper[4693]: I1212 16:18:13.165784 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-p4r6w"
Dec 12 16:18:13 crc kubenswrapper[4693]: I1212 16:18:13.382912 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50ace206-3061-4478-910f-dcbbf46d5f72" path="/var/lib/kubelet/pods/50ace206-3061-4478-910f-dcbbf46d5f72/volumes"
Dec 12 16:18:15 crc kubenswrapper[4693]: E1212 16:18:15.523227 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="62edd93be233903199368d4a2eb49cd9c5da2bd76e170d84c9e0717ee947b966" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Dec 12 16:18:15 crc kubenswrapper[4693]: E1212 16:18:15.525249 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="62edd93be233903199368d4a2eb49cd9c5da2bd76e170d84c9e0717ee947b966" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Dec 12 16:18:15 crc kubenswrapper[4693]: E1212 16:18:15.527535 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="62edd93be233903199368d4a2eb49cd9c5da2bd76e170d84c9e0717ee947b966" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Dec 12 16:18:15 crc kubenswrapper[4693]: E1212 16:18:15.527617 4693 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-776c75b6d4-cbj4w" podUID="77b98312-4447-4e00-b457-c724c0b623d3" containerName="heat-engine"
Dec 12 16:18:16 crc kubenswrapper[4693]: I1212 16:18:16.037527 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Dec 12 16:18:16 crc kubenswrapper[4693]: I1212 16:18:16.325421 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Dec 12 16:18:16 crc kubenswrapper[4693]: I1212 16:18:16.472427 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"]
Dec 12 16:18:17 crc kubenswrapper[4693]: E1212 16:18:17.924262 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest"
Dec 12 16:18:17 crc kubenswrapper[4693]: E1212 16:18:17.925134 4693 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Dec 12 16:18:17 crc kubenswrapper[4693]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value:
Dec 12 16:18:17 crc kubenswrapper[4693]: - hosts: all
Dec 12 16:18:17 crc kubenswrapper[4693]: strategy: linear
Dec 12 16:18:17 crc kubenswrapper[4693]: tasks:
Dec 12 16:18:17 crc kubenswrapper[4693]: - name: Enable podified-repos
Dec 12 16:18:17 crc kubenswrapper[4693]: become: true
Dec 12 16:18:17 crc kubenswrapper[4693]: ansible.builtin.shell: |
Dec 12 16:18:17 crc kubenswrapper[4693]: set -euxo pipefail
Dec 12 16:18:17 crc kubenswrapper[4693]: pushd /var/tmp
Dec 12 16:18:17 crc kubenswrapper[4693]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
Dec 12 16:18:17 crc kubenswrapper[4693]: pushd repo-setup-main
Dec 12 16:18:17 crc kubenswrapper[4693]: python3 -m venv ./venv
Dec 12 16:18:17 crc kubenswrapper[4693]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./
Dec 12 16:18:17 crc kubenswrapper[4693]: ./venv/bin/repo-setup current-podified -b antelope
Dec 12 16:18:17 crc kubenswrapper[4693]: popd
Dec 12 16:18:17 crc kubenswrapper[4693]: rm -rf repo-setup-main
Dec 12 16:18:17 crc kubenswrapper[4693]:
Dec 12 16:18:17 crc kubenswrapper[4693]:
Dec 12 16:18:17 crc kubenswrapper[4693]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value:
Dec 12 16:18:17 crc kubenswrapper[4693]: edpm_override_hosts: openstack-edpm-ipam
Dec 12 16:18:17 crc kubenswrapper[4693]: edpm_service_type: repo-setup
Dec 12 16:18:17 crc kubenswrapper[4693]:
Dec 12 16:18:17 crc kubenswrapper[4693]:
Dec 12 16:18:17 crc kubenswrapper[4693]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/runner/env/ssh_key,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mgb4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc_openstack(e1e7981a-9706-4ed7-96b9-f2a1c65a6113): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
Dec 12 16:18:17 crc kubenswrapper[4693]: > logger="UnhandledError"
Dec 12 16:18:17 crc kubenswrapper[4693]: E1212 16:18:17.926446 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc" podUID="e1e7981a-9706-4ed7-96b9-f2a1c65a6113"
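The unhandled-error dump above embeds the RUNNER_PLAYBOOK and RUNNER_EXTRA_VARS that the repo-setup container was meant to run, one journal record per line with the original indentation lost. Reassembled as ordinary playbook YAML; the indentation is inferred, but the content is otherwise verbatim from the dump:

    - hosts: all
      strategy: linear
      tasks:
        - name: Enable podified-repos
          become: true
          ansible.builtin.shell: |
            set -euxo pipefail
            pushd /var/tmp
            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
            pushd repo-setup-main
            python3 -m venv ./venv
            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
            ./venv/bin/repo-setup current-podified -b antelope
            popd
            rm -rf repo-setup-main

    # Extra vars, verbatim from the dump:
    #   edpm_override_hosts: openstack-edpm-ipam
    #   edpm_service_type: repo-setup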
containerID="62edd93be233903199368d4a2eb49cd9c5da2bd76e170d84c9e0717ee947b966" exitCode=0 Dec 12 16:18:18 crc kubenswrapper[4693]: I1212 16:18:18.181554 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-776c75b6d4-cbj4w" event={"ID":"77b98312-4447-4e00-b457-c724c0b623d3","Type":"ContainerDied","Data":"62edd93be233903199368d4a2eb49cd9c5da2bd76e170d84c9e0717ee947b966"} Dec 12 16:18:18 crc kubenswrapper[4693]: E1212 16:18:18.183937 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc" podUID="e1e7981a-9706-4ed7-96b9-f2a1c65a6113" Dec 12 16:18:18 crc kubenswrapper[4693]: I1212 16:18:18.506625 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:18:18 crc kubenswrapper[4693]: I1212 16:18:18.561752 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-config-data-custom\") pod \"77b98312-4447-4e00-b457-c724c0b623d3\" (UID: \"77b98312-4447-4e00-b457-c724c0b623d3\") " Dec 12 16:18:18 crc kubenswrapper[4693]: I1212 16:18:18.561833 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-config-data\") pod \"77b98312-4447-4e00-b457-c724c0b623d3\" (UID: \"77b98312-4447-4e00-b457-c724c0b623d3\") " Dec 12 16:18:18 crc kubenswrapper[4693]: I1212 16:18:18.561858 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-combined-ca-bundle\") pod \"77b98312-4447-4e00-b457-c724c0b623d3\" (UID: \"77b98312-4447-4e00-b457-c724c0b623d3\") " Dec 12 16:18:18 crc kubenswrapper[4693]: I1212 16:18:18.561909 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbfft\" (UniqueName: \"kubernetes.io/projected/77b98312-4447-4e00-b457-c724c0b623d3-kube-api-access-cbfft\") pod \"77b98312-4447-4e00-b457-c724c0b623d3\" (UID: \"77b98312-4447-4e00-b457-c724c0b623d3\") " Dec 12 16:18:18 crc kubenswrapper[4693]: I1212 16:18:18.569353 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b98312-4447-4e00-b457-c724c0b623d3-kube-api-access-cbfft" (OuterVolumeSpecName: "kube-api-access-cbfft") pod "77b98312-4447-4e00-b457-c724c0b623d3" (UID: "77b98312-4447-4e00-b457-c724c0b623d3"). InnerVolumeSpecName "kube-api-access-cbfft". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:18:18 crc kubenswrapper[4693]: I1212 16:18:18.582050 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-p4r6w"] Dec 12 16:18:18 crc kubenswrapper[4693]: I1212 16:18:18.586434 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "77b98312-4447-4e00-b457-c724c0b623d3" (UID: "77b98312-4447-4e00-b457-c724c0b623d3"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:18 crc kubenswrapper[4693]: I1212 16:18:18.601620 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77b98312-4447-4e00-b457-c724c0b623d3" (UID: "77b98312-4447-4e00-b457-c724c0b623d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:18 crc kubenswrapper[4693]: I1212 16:18:18.666901 4693 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:18 crc kubenswrapper[4693]: I1212 16:18:18.666942 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:18 crc kubenswrapper[4693]: I1212 16:18:18.666952 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbfft\" (UniqueName: \"kubernetes.io/projected/77b98312-4447-4e00-b457-c724c0b623d3-kube-api-access-cbfft\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:18 crc kubenswrapper[4693]: I1212 16:18:18.686443 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-config-data" (OuterVolumeSpecName: "config-data") pod "77b98312-4447-4e00-b457-c724c0b623d3" (UID: "77b98312-4447-4e00-b457-c724c0b623d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:18 crc kubenswrapper[4693]: I1212 16:18:18.769496 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b98312-4447-4e00-b457-c724c0b623d3-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:19 crc kubenswrapper[4693]: I1212 16:18:19.196223 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-p4r6w" event={"ID":"6a78008e-b445-4968-b856-0ce60d97383f","Type":"ContainerStarted","Data":"01fc62f906c587616db9d5a5ac45fbaa1be0b6d0d7cce8b9a4ff569fd2f00be7"} Dec 12 16:18:19 crc kubenswrapper[4693]: I1212 16:18:19.198146 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-776c75b6d4-cbj4w" event={"ID":"77b98312-4447-4e00-b457-c724c0b623d3","Type":"ContainerDied","Data":"041ef6247ea3d0a856ae424e427936ef1cfc16598d8e981f26ca4e5b527434db"} Dec 12 16:18:19 crc kubenswrapper[4693]: I1212 16:18:19.198199 4693 scope.go:117] "RemoveContainer" containerID="62edd93be233903199368d4a2eb49cd9c5da2bd76e170d84c9e0717ee947b966" Dec 12 16:18:19 crc kubenswrapper[4693]: I1212 16:18:19.198367 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-776c75b6d4-cbj4w" Dec 12 16:18:19 crc kubenswrapper[4693]: I1212 16:18:19.239610 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-776c75b6d4-cbj4w"] Dec 12 16:18:19 crc kubenswrapper[4693]: I1212 16:18:19.251212 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-776c75b6d4-cbj4w"] Dec 12 16:18:19 crc kubenswrapper[4693]: I1212 16:18:19.404893 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77b98312-4447-4e00-b457-c724c0b623d3" path="/var/lib/kubelet/pods/77b98312-4447-4e00-b457-c724c0b623d3/volumes" Dec 12 16:18:21 crc kubenswrapper[4693]: I1212 16:18:21.617559 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="62a37a53-6f53-4b51-b493-edfdb42c3a93" containerName="rabbitmq" containerID="cri-o://1b413ccc7a0fa4d9e605786f276abb135dc8197b612a15236191c260017d896a" gracePeriod=604795 Dec 12 16:18:23 crc kubenswrapper[4693]: I1212 16:18:23.374214 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67" Dec 12 16:18:25 crc kubenswrapper[4693]: I1212 16:18:25.055876 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 12 16:18:26 crc kubenswrapper[4693]: I1212 16:18:26.287682 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-p4r6w" event={"ID":"6a78008e-b445-4968-b856-0ce60d97383f","Type":"ContainerStarted","Data":"9efafdf7433ee5f1ca6b75bc210f67349756dec9285ecd1637a965b3b167608c"} Dec 12 16:18:26 crc kubenswrapper[4693]: I1212 16:18:26.292438 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"6c3c61a76193f5b8920ed6ca3953db8f9d6878fcc15435a84ab980dcf1f2a982"} Dec 12 16:18:26 crc kubenswrapper[4693]: I1212 16:18:26.318212 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-p4r6w" podStartSLOduration=7.846678416 podStartE2EDuration="14.318186391s" podCreationTimestamp="2025-12-12 16:18:12 +0000 UTC" firstStartedPulling="2025-12-12 16:18:18.581118983 +0000 UTC m=+1925.749758584" lastFinishedPulling="2025-12-12 16:18:25.052626958 +0000 UTC m=+1932.221266559" observedRunningTime="2025-12-12 16:18:26.305001811 +0000 UTC m=+1933.473641422" watchObservedRunningTime="2025-12-12 16:18:26.318186391 +0000 UTC m=+1933.486825992" Dec 12 16:18:27 crc kubenswrapper[4693]: I1212 16:18:27.517840 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="62a37a53-6f53-4b51-b493-edfdb42c3a93" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.326875 4693 generic.go:334] "Generic (PLEG): container finished" podID="6a78008e-b445-4968-b856-0ce60d97383f" containerID="9efafdf7433ee5f1ca6b75bc210f67349756dec9285ecd1637a965b3b167608c" exitCode=0 Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.327390 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-p4r6w" event={"ID":"6a78008e-b445-4968-b856-0ce60d97383f","Type":"ContainerDied","Data":"9efafdf7433ee5f1ca6b75bc210f67349756dec9285ecd1637a965b3b167608c"} Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.332850 4693 generic.go:334] "Generic 
(PLEG): container finished" podID="62a37a53-6f53-4b51-b493-edfdb42c3a93" containerID="1b413ccc7a0fa4d9e605786f276abb135dc8197b612a15236191c260017d896a" exitCode=0 Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.332891 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"62a37a53-6f53-4b51-b493-edfdb42c3a93","Type":"ContainerDied","Data":"1b413ccc7a0fa4d9e605786f276abb135dc8197b612a15236191c260017d896a"} Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.478255 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.530356 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62a37a53-6f53-4b51-b493-edfdb42c3a93-erlang-cookie-secret\") pod \"62a37a53-6f53-4b51-b493-edfdb42c3a93\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.530501 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-confd\") pod \"62a37a53-6f53-4b51-b493-edfdb42c3a93\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.530533 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-server-conf\") pod \"62a37a53-6f53-4b51-b493-edfdb42c3a93\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.530579 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62a37a53-6f53-4b51-b493-edfdb42c3a93-pod-info\") pod \"62a37a53-6f53-4b51-b493-edfdb42c3a93\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.530605 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-erlang-cookie\") pod \"62a37a53-6f53-4b51-b493-edfdb42c3a93\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.530639 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-tls\") pod \"62a37a53-6f53-4b51-b493-edfdb42c3a93\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.530705 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-config-data\") pod \"62a37a53-6f53-4b51-b493-edfdb42c3a93\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.530725 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72fpp\" (UniqueName: \"kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-kube-api-access-72fpp\") pod \"62a37a53-6f53-4b51-b493-edfdb42c3a93\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.530751 
4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-plugins\") pod \"62a37a53-6f53-4b51-b493-edfdb42c3a93\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.531841 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\") pod \"62a37a53-6f53-4b51-b493-edfdb42c3a93\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.531881 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-plugins-conf\") pod \"62a37a53-6f53-4b51-b493-edfdb42c3a93\" (UID: \"62a37a53-6f53-4b51-b493-edfdb42c3a93\") " Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.536876 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "62a37a53-6f53-4b51-b493-edfdb42c3a93" (UID: "62a37a53-6f53-4b51-b493-edfdb42c3a93"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.537859 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "62a37a53-6f53-4b51-b493-edfdb42c3a93" (UID: "62a37a53-6f53-4b51-b493-edfdb42c3a93"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.543332 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "62a37a53-6f53-4b51-b493-edfdb42c3a93" (UID: "62a37a53-6f53-4b51-b493-edfdb42c3a93"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.543629 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a37a53-6f53-4b51-b493-edfdb42c3a93-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "62a37a53-6f53-4b51-b493-edfdb42c3a93" (UID: "62a37a53-6f53-4b51-b493-edfdb42c3a93"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.544023 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "62a37a53-6f53-4b51-b493-edfdb42c3a93" (UID: "62a37a53-6f53-4b51-b493-edfdb42c3a93"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.546646 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-kube-api-access-72fpp" (OuterVolumeSpecName: "kube-api-access-72fpp") pod "62a37a53-6f53-4b51-b493-edfdb42c3a93" (UID: "62a37a53-6f53-4b51-b493-edfdb42c3a93"). InnerVolumeSpecName "kube-api-access-72fpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.576468 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/62a37a53-6f53-4b51-b493-edfdb42c3a93-pod-info" (OuterVolumeSpecName: "pod-info") pod "62a37a53-6f53-4b51-b493-edfdb42c3a93" (UID: "62a37a53-6f53-4b51-b493-edfdb42c3a93"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.599023 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acab46b2-5b55-4ecf-895b-93f481c6d063" (OuterVolumeSpecName: "persistence") pod "62a37a53-6f53-4b51-b493-edfdb42c3a93" (UID: "62a37a53-6f53-4b51-b493-edfdb42c3a93"). InnerVolumeSpecName "pvc-acab46b2-5b55-4ecf-895b-93f481c6d063". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.602292 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-config-data" (OuterVolumeSpecName: "config-data") pod "62a37a53-6f53-4b51-b493-edfdb42c3a93" (UID: "62a37a53-6f53-4b51-b493-edfdb42c3a93"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.634797 4693 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\") on node \"crc\" " Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.634832 4693 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.634843 4693 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62a37a53-6f53-4b51-b493-edfdb42c3a93-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.634851 4693 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62a37a53-6f53-4b51-b493-edfdb42c3a93-pod-info\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.634861 4693 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.634871 4693 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.634879 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.634890 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72fpp\" (UniqueName: \"kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-kube-api-access-72fpp\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.634899 4693 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.664493 4693 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.664842 4693 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-acab46b2-5b55-4ecf-895b-93f481c6d063" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acab46b2-5b55-4ecf-895b-93f481c6d063") on node "crc" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.678874 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-server-conf" (OuterVolumeSpecName: "server-conf") pod "62a37a53-6f53-4b51-b493-edfdb42c3a93" (UID: "62a37a53-6f53-4b51-b493-edfdb42c3a93"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.737118 4693 reconciler_common.go:293] "Volume detached for volume \"pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.737167 4693 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62a37a53-6f53-4b51-b493-edfdb42c3a93-server-conf\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.746701 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "62a37a53-6f53-4b51-b493-edfdb42c3a93" (UID: "62a37a53-6f53-4b51-b493-edfdb42c3a93"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:18:28 crc kubenswrapper[4693]: I1212 16:18:28.839016 4693 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62a37a53-6f53-4b51-b493-edfdb42c3a93-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.353052 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.353021 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"62a37a53-6f53-4b51-b493-edfdb42c3a93","Type":"ContainerDied","Data":"c6090bf6f211e1ed5df5a86159a126cefedd762fd8c7b221d39e99193bf47586"} Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.353223 4693 scope.go:117] "RemoveContainer" containerID="1b413ccc7a0fa4d9e605786f276abb135dc8197b612a15236191c260017d896a" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.407990 4693 scope.go:117] "RemoveContainer" containerID="22692ec927135356ca36a5a4cb2f6741bb3735d358eacd1c581a9edc0bb9e830" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.424327 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.445107 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.458124 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Dec 12 16:18:29 crc kubenswrapper[4693]: E1212 16:18:29.460547 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a37a53-6f53-4b51-b493-edfdb42c3a93" containerName="setup-container" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.460577 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a37a53-6f53-4b51-b493-edfdb42c3a93" containerName="setup-container" Dec 12 16:18:29 crc kubenswrapper[4693]: E1212 16:18:29.460590 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a37a53-6f53-4b51-b493-edfdb42c3a93" containerName="rabbitmq" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.460596 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a37a53-6f53-4b51-b493-edfdb42c3a93" containerName="rabbitmq" Dec 12 16:18:29 crc kubenswrapper[4693]: E1212 16:18:29.460615 4693 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="77b98312-4447-4e00-b457-c724c0b623d3" containerName="heat-engine" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.460621 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b98312-4447-4e00-b457-c724c0b623d3" containerName="heat-engine" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.460901 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a37a53-6f53-4b51-b493-edfdb42c3a93" containerName="rabbitmq" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.460940 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="77b98312-4447-4e00-b457-c724c0b623d3" containerName="heat-engine" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.463230 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.469585 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.554674 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8e84ac54-c034-447a-99a4-3050a7d7eb18-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.554753 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e84ac54-c034-447a-99a4-3050a7d7eb18-config-data\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.554777 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8e84ac54-c034-447a-99a4-3050a7d7eb18-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.554881 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8e84ac54-c034-447a-99a4-3050a7d7eb18-pod-info\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.555060 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8e84ac54-c034-447a-99a4-3050a7d7eb18-server-conf\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.555109 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mbct\" (UniqueName: \"kubernetes.io/projected/8e84ac54-c034-447a-99a4-3050a7d7eb18-kube-api-access-7mbct\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.555211 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/8e84ac54-c034-447a-99a4-3050a7d7eb18-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.555709 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.555781 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8e84ac54-c034-447a-99a4-3050a7d7eb18-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.555807 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8e84ac54-c034-447a-99a4-3050a7d7eb18-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.555841 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8e84ac54-c034-447a-99a4-3050a7d7eb18-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.658176 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.658267 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8e84ac54-c034-447a-99a4-3050a7d7eb18-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.658360 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8e84ac54-c034-447a-99a4-3050a7d7eb18-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.658390 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8e84ac54-c034-447a-99a4-3050a7d7eb18-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.658478 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/8e84ac54-c034-447a-99a4-3050a7d7eb18-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.658535 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e84ac54-c034-447a-99a4-3050a7d7eb18-config-data\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.658552 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8e84ac54-c034-447a-99a4-3050a7d7eb18-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.658592 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8e84ac54-c034-447a-99a4-3050a7d7eb18-pod-info\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.658628 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8e84ac54-c034-447a-99a4-3050a7d7eb18-server-conf\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.658668 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mbct\" (UniqueName: \"kubernetes.io/projected/8e84ac54-c034-447a-99a4-3050a7d7eb18-kube-api-access-7mbct\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.658707 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8e84ac54-c034-447a-99a4-3050a7d7eb18-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.659651 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8e84ac54-c034-447a-99a4-3050a7d7eb18-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.659889 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8e84ac54-c034-447a-99a4-3050a7d7eb18-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.660512 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8e84ac54-c034-447a-99a4-3050a7d7eb18-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.662417 
4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e84ac54-c034-447a-99a4-3050a7d7eb18-config-data\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.663091 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8e84ac54-c034-447a-99a4-3050a7d7eb18-server-conf\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.664247 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.664287 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7659dab294232f1341c4bc34db4715b7842ffd31c77cab48e5c6a75713a05aea/globalmount\"" pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.664410 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8e84ac54-c034-447a-99a4-3050a7d7eb18-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.665155 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8e84ac54-c034-447a-99a4-3050a7d7eb18-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.666682 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8e84ac54-c034-447a-99a4-3050a7d7eb18-pod-info\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.669372 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8e84ac54-c034-447a-99a4-3050a7d7eb18-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.682036 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mbct\" (UniqueName: \"kubernetes.io/projected/8e84ac54-c034-447a-99a4-3050a7d7eb18-kube-api-access-7mbct\") pod \"rabbitmq-server-1\" (UID: \"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.757133 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acab46b2-5b55-4ecf-895b-93f481c6d063\") pod \"rabbitmq-server-1\" (UID: 
\"8e84ac54-c034-447a-99a4-3050a7d7eb18\") " pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.829639 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.874728 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-p4r6w" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.971013 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-scripts\") pod \"6a78008e-b445-4968-b856-0ce60d97383f\" (UID: \"6a78008e-b445-4968-b856-0ce60d97383f\") " Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.971054 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-config-data\") pod \"6a78008e-b445-4968-b856-0ce60d97383f\" (UID: \"6a78008e-b445-4968-b856-0ce60d97383f\") " Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.971095 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-combined-ca-bundle\") pod \"6a78008e-b445-4968-b856-0ce60d97383f\" (UID: \"6a78008e-b445-4968-b856-0ce60d97383f\") " Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.971244 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwl75\" (UniqueName: \"kubernetes.io/projected/6a78008e-b445-4968-b856-0ce60d97383f-kube-api-access-wwl75\") pod \"6a78008e-b445-4968-b856-0ce60d97383f\" (UID: \"6a78008e-b445-4968-b856-0ce60d97383f\") " Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.979494 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a78008e-b445-4968-b856-0ce60d97383f-kube-api-access-wwl75" (OuterVolumeSpecName: "kube-api-access-wwl75") pod "6a78008e-b445-4968-b856-0ce60d97383f" (UID: "6a78008e-b445-4968-b856-0ce60d97383f"). InnerVolumeSpecName "kube-api-access-wwl75". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:18:29 crc kubenswrapper[4693]: I1212 16:18:29.980133 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-scripts" (OuterVolumeSpecName: "scripts") pod "6a78008e-b445-4968-b856-0ce60d97383f" (UID: "6a78008e-b445-4968-b856-0ce60d97383f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:30 crc kubenswrapper[4693]: I1212 16:18:30.005528 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-config-data" (OuterVolumeSpecName: "config-data") pod "6a78008e-b445-4968-b856-0ce60d97383f" (UID: "6a78008e-b445-4968-b856-0ce60d97383f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:30 crc kubenswrapper[4693]: I1212 16:18:30.034658 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a78008e-b445-4968-b856-0ce60d97383f" (UID: "6a78008e-b445-4968-b856-0ce60d97383f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:30 crc kubenswrapper[4693]: I1212 16:18:30.074663 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:30 crc kubenswrapper[4693]: I1212 16:18:30.074709 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:30 crc kubenswrapper[4693]: I1212 16:18:30.074721 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a78008e-b445-4968-b856-0ce60d97383f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:30 crc kubenswrapper[4693]: I1212 16:18:30.074734 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwl75\" (UniqueName: \"kubernetes.io/projected/6a78008e-b445-4968-b856-0ce60d97383f-kube-api-access-wwl75\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:30 crc kubenswrapper[4693]: I1212 16:18:30.333154 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Dec 12 16:18:30 crc kubenswrapper[4693]: I1212 16:18:30.378931 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-p4r6w" event={"ID":"6a78008e-b445-4968-b856-0ce60d97383f","Type":"ContainerDied","Data":"01fc62f906c587616db9d5a5ac45fbaa1be0b6d0d7cce8b9a4ff569fd2f00be7"} Dec 12 16:18:30 crc kubenswrapper[4693]: I1212 16:18:30.378977 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01fc62f906c587616db9d5a5ac45fbaa1be0b6d0d7cce8b9a4ff569fd2f00be7" Dec 12 16:18:30 crc kubenswrapper[4693]: I1212 16:18:30.379055 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-p4r6w" Dec 12 16:18:30 crc kubenswrapper[4693]: I1212 16:18:30.385489 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"8e84ac54-c034-447a-99a4-3050a7d7eb18","Type":"ContainerStarted","Data":"91561fe2c0ece8e0e7200e5f4cc7a125fa796ef2d1f9ba7b1534dafc7be6a8ff"} Dec 12 16:18:31 crc kubenswrapper[4693]: I1212 16:18:31.382459 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62a37a53-6f53-4b51-b493-edfdb42c3a93" path="/var/lib/kubelet/pods/62a37a53-6f53-4b51-b493-edfdb42c3a93/volumes" Dec 12 16:18:31 crc kubenswrapper[4693]: I1212 16:18:31.883790 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:18:32 crc kubenswrapper[4693]: I1212 16:18:32.688907 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Dec 12 16:18:32 crc kubenswrapper[4693]: I1212 16:18:32.689471 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" containerName="aodh-api" containerID="cri-o://71275e3fa2b95d3a01099553abf9ab31889d1ba9e763f49446e4cb2603c4e880" gracePeriod=30 Dec 12 16:18:32 crc kubenswrapper[4693]: I1212 16:18:32.689514 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" containerName="aodh-listener" containerID="cri-o://236e1bd78d2389b3c32aa0fd37745fb99ec40786911ba9f2c7a685c2b64d39cb" gracePeriod=30 Dec 12 16:18:32 crc kubenswrapper[4693]: I1212 16:18:32.689579 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" containerName="aodh-notifier" containerID="cri-o://d7c05b41d4b991f2f918af8d817c927d7df0ea33b8a9222823725c29af8fe3e1" gracePeriod=30 Dec 12 16:18:32 crc kubenswrapper[4693]: I1212 16:18:32.689622 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" containerName="aodh-evaluator" containerID="cri-o://bb1d3db2879c11b8e55219a6ba9be7661f7b9d5356feebe9fa603e968c6ec7a5" gracePeriod=30 Dec 12 16:18:33 crc kubenswrapper[4693]: I1212 16:18:33.429101 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc" event={"ID":"e1e7981a-9706-4ed7-96b9-f2a1c65a6113","Type":"ContainerStarted","Data":"d217e93755c51ac676d8a07a8d69da1375bf7ba51099ef8ab9300898fcbd1fe6"} Dec 12 16:18:33 crc kubenswrapper[4693]: I1212 16:18:33.432342 4693 generic.go:334] "Generic (PLEG): container finished" podID="c6774670-3fa6-45d1-9174-72ceb917785b" containerID="bb1d3db2879c11b8e55219a6ba9be7661f7b9d5356feebe9fa603e968c6ec7a5" exitCode=0 Dec 12 16:18:33 crc kubenswrapper[4693]: I1212 16:18:33.432375 4693 generic.go:334] "Generic (PLEG): container finished" podID="c6774670-3fa6-45d1-9174-72ceb917785b" containerID="71275e3fa2b95d3a01099553abf9ab31889d1ba9e763f49446e4cb2603c4e880" exitCode=0 Dec 12 16:18:33 crc kubenswrapper[4693]: I1212 16:18:33.432429 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6774670-3fa6-45d1-9174-72ceb917785b","Type":"ContainerDied","Data":"bb1d3db2879c11b8e55219a6ba9be7661f7b9d5356feebe9fa603e968c6ec7a5"} Dec 12 16:18:33 crc kubenswrapper[4693]: I1212 16:18:33.432491 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-0" event={"ID":"c6774670-3fa6-45d1-9174-72ceb917785b","Type":"ContainerDied","Data":"71275e3fa2b95d3a01099553abf9ab31889d1ba9e763f49446e4cb2603c4e880"} Dec 12 16:18:33 crc kubenswrapper[4693]: I1212 16:18:33.434022 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"8e84ac54-c034-447a-99a4-3050a7d7eb18","Type":"ContainerStarted","Data":"ea82f375d7399c115a90e9619e1273632ae50486114f08b1a61dba53b0a56120"} Dec 12 16:18:33 crc kubenswrapper[4693]: I1212 16:18:33.458816 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc" podStartSLOduration=3.300147232 podStartE2EDuration="33.458787979s" podCreationTimestamp="2025-12-12 16:18:00 +0000 UTC" firstStartedPulling="2025-12-12 16:18:01.719733969 +0000 UTC m=+1908.888373570" lastFinishedPulling="2025-12-12 16:18:31.878374716 +0000 UTC m=+1939.047014317" observedRunningTime="2025-12-12 16:18:33.450982936 +0000 UTC m=+1940.619622557" watchObservedRunningTime="2025-12-12 16:18:33.458787979 +0000 UTC m=+1940.627427600" Dec 12 16:18:40 crc kubenswrapper[4693]: I1212 16:18:40.544761 4693 generic.go:334] "Generic (PLEG): container finished" podID="c6774670-3fa6-45d1-9174-72ceb917785b" containerID="236e1bd78d2389b3c32aa0fd37745fb99ec40786911ba9f2c7a685c2b64d39cb" exitCode=0 Dec 12 16:18:40 crc kubenswrapper[4693]: I1212 16:18:40.545337 4693 generic.go:334] "Generic (PLEG): container finished" podID="c6774670-3fa6-45d1-9174-72ceb917785b" containerID="d7c05b41d4b991f2f918af8d817c927d7df0ea33b8a9222823725c29af8fe3e1" exitCode=0 Dec 12 16:18:40 crc kubenswrapper[4693]: I1212 16:18:40.544999 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6774670-3fa6-45d1-9174-72ceb917785b","Type":"ContainerDied","Data":"236e1bd78d2389b3c32aa0fd37745fb99ec40786911ba9f2c7a685c2b64d39cb"} Dec 12 16:18:40 crc kubenswrapper[4693]: I1212 16:18:40.545383 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6774670-3fa6-45d1-9174-72ceb917785b","Type":"ContainerDied","Data":"d7c05b41d4b991f2f918af8d817c927d7df0ea33b8a9222823725c29af8fe3e1"} Dec 12 16:18:40 crc kubenswrapper[4693]: I1212 16:18:40.857878 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Dec 12 16:18:40 crc kubenswrapper[4693]: I1212 16:18:40.903901 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-internal-tls-certs\") pod \"c6774670-3fa6-45d1-9174-72ceb917785b\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " Dec 12 16:18:40 crc kubenswrapper[4693]: I1212 16:18:40.903999 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-config-data\") pod \"c6774670-3fa6-45d1-9174-72ceb917785b\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " Dec 12 16:18:40 crc kubenswrapper[4693]: I1212 16:18:40.904120 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-297hh\" (UniqueName: \"kubernetes.io/projected/c6774670-3fa6-45d1-9174-72ceb917785b-kube-api-access-297hh\") pod \"c6774670-3fa6-45d1-9174-72ceb917785b\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " Dec 12 16:18:40 crc kubenswrapper[4693]: I1212 16:18:40.904191 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-public-tls-certs\") pod \"c6774670-3fa6-45d1-9174-72ceb917785b\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " Dec 12 16:18:40 crc kubenswrapper[4693]: I1212 16:18:40.904293 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-combined-ca-bundle\") pod \"c6774670-3fa6-45d1-9174-72ceb917785b\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " Dec 12 16:18:40 crc kubenswrapper[4693]: I1212 16:18:40.904360 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-scripts\") pod \"c6774670-3fa6-45d1-9174-72ceb917785b\" (UID: \"c6774670-3fa6-45d1-9174-72ceb917785b\") " Dec 12 16:18:40 crc kubenswrapper[4693]: I1212 16:18:40.922635 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6774670-3fa6-45d1-9174-72ceb917785b-kube-api-access-297hh" (OuterVolumeSpecName: "kube-api-access-297hh") pod "c6774670-3fa6-45d1-9174-72ceb917785b" (UID: "c6774670-3fa6-45d1-9174-72ceb917785b"). InnerVolumeSpecName "kube-api-access-297hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:18:40 crc kubenswrapper[4693]: I1212 16:18:40.923480 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-scripts" (OuterVolumeSpecName: "scripts") pod "c6774670-3fa6-45d1-9174-72ceb917785b" (UID: "c6774670-3fa6-45d1-9174-72ceb917785b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:40 crc kubenswrapper[4693]: I1212 16:18:40.998727 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c6774670-3fa6-45d1-9174-72ceb917785b" (UID: "c6774670-3fa6-45d1-9174-72ceb917785b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.008669 4693 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.008707 4693 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.008720 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-297hh\" (UniqueName: \"kubernetes.io/projected/c6774670-3fa6-45d1-9174-72ceb917785b-kube-api-access-297hh\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.050873 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c6774670-3fa6-45d1-9174-72ceb917785b" (UID: "c6774670-3fa6-45d1-9174-72ceb917785b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.088220 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6774670-3fa6-45d1-9174-72ceb917785b" (UID: "c6774670-3fa6-45d1-9174-72ceb917785b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.111483 4693 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.111698 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.122266 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-config-data" (OuterVolumeSpecName: "config-data") pod "c6774670-3fa6-45d1-9174-72ceb917785b" (UID: "c6774670-3fa6-45d1-9174-72ceb917785b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.213061 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6774670-3fa6-45d1-9174-72ceb917785b-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.559918 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6774670-3fa6-45d1-9174-72ceb917785b","Type":"ContainerDied","Data":"519166189f2dd700587947ff16f42a20ee0fdc85566d795fa259035306220e7e"} Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.560231 4693 scope.go:117] "RemoveContainer" containerID="236e1bd78d2389b3c32aa0fd37745fb99ec40786911ba9f2c7a685c2b64d39cb" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.560038 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.589011 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.611337 4693 scope.go:117] "RemoveContainer" containerID="d7c05b41d4b991f2f918af8d817c927d7df0ea33b8a9222823725c29af8fe3e1" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.615979 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.632105 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Dec 12 16:18:41 crc kubenswrapper[4693]: E1212 16:18:41.632681 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" containerName="aodh-evaluator" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.632701 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" containerName="aodh-evaluator" Dec 12 16:18:41 crc kubenswrapper[4693]: E1212 16:18:41.632747 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" containerName="aodh-api" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.632754 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" containerName="aodh-api" Dec 12 16:18:41 crc kubenswrapper[4693]: E1212 16:18:41.632769 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" containerName="aodh-listener" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.632778 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" containerName="aodh-listener" Dec 12 16:18:41 crc kubenswrapper[4693]: E1212 16:18:41.632798 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a78008e-b445-4968-b856-0ce60d97383f" containerName="aodh-db-sync" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.632804 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a78008e-b445-4968-b856-0ce60d97383f" containerName="aodh-db-sync" Dec 12 16:18:41 crc kubenswrapper[4693]: E1212 16:18:41.632814 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" containerName="aodh-notifier" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.632821 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" containerName="aodh-notifier" Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.633041 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" containerName="aodh-notifier"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.633068 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a78008e-b445-4968-b856-0ce60d97383f" containerName="aodh-db-sync"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.633077 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" containerName="aodh-evaluator"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.633087 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" containerName="aodh-listener"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.633106 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" containerName="aodh-api"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.635310 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.642046 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jzr72"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.642361 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.642436 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.642634 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.642635 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.645008 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.664716 4693 scope.go:117] "RemoveContainer" containerID="bb1d3db2879c11b8e55219a6ba9be7661f7b9d5356feebe9fa603e968c6ec7a5"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.711210 4693 scope.go:117] "RemoveContainer" containerID="71275e3fa2b95d3a01099553abf9ab31889d1ba9e763f49446e4cb2603c4e880"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.735253 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f36a798-eb1b-42f9-b874-79cd5e085e41-internal-tls-certs\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.735424 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f36a798-eb1b-42f9-b874-79cd5e085e41-scripts\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.735465 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f36a798-eb1b-42f9-b874-79cd5e085e41-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.735602 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f36a798-eb1b-42f9-b874-79cd5e085e41-public-tls-certs\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.735629 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f36a798-eb1b-42f9-b874-79cd5e085e41-config-data\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.735679 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9km9\" (UniqueName: \"kubernetes.io/projected/2f36a798-eb1b-42f9-b874-79cd5e085e41-kube-api-access-f9km9\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.838034 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f36a798-eb1b-42f9-b874-79cd5e085e41-public-tls-certs\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.838081 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f36a798-eb1b-42f9-b874-79cd5e085e41-config-data\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.838127 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9km9\" (UniqueName: \"kubernetes.io/projected/2f36a798-eb1b-42f9-b874-79cd5e085e41-kube-api-access-f9km9\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.838244 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f36a798-eb1b-42f9-b874-79cd5e085e41-internal-tls-certs\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.838371 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f36a798-eb1b-42f9-b874-79cd5e085e41-scripts\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.838400 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f36a798-eb1b-42f9-b874-79cd5e085e41-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.843406 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f36a798-eb1b-42f9-b874-79cd5e085e41-public-tls-certs\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.843522 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f36a798-eb1b-42f9-b874-79cd5e085e41-scripts\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.843804 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f36a798-eb1b-42f9-b874-79cd5e085e41-internal-tls-certs\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.844987 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f36a798-eb1b-42f9-b874-79cd5e085e41-config-data\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.845377 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f36a798-eb1b-42f9-b874-79cd5e085e41-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.859325 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9km9\" (UniqueName: \"kubernetes.io/projected/2f36a798-eb1b-42f9-b874-79cd5e085e41-kube-api-access-f9km9\") pod \"aodh-0\" (UID: \"2f36a798-eb1b-42f9-b874-79cd5e085e41\") " pod="openstack/aodh-0"
Dec 12 16:18:41 crc kubenswrapper[4693]: I1212 16:18:41.959905 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Dec 12 16:18:42 crc kubenswrapper[4693]: W1212 16:18:42.467988 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f36a798_eb1b_42f9_b874_79cd5e085e41.slice/crio-69253ac963acb833d8dacf62acf5804042e2d2841a5bec3b4670f9f736449150 WatchSource:0}: Error finding container 69253ac963acb833d8dacf62acf5804042e2d2841a5bec3b4670f9f736449150: Status 404 returned error can't find the container with id 69253ac963acb833d8dacf62acf5804042e2d2841a5bec3b4670f9f736449150
Dec 12 16:18:42 crc kubenswrapper[4693]: I1212 16:18:42.468129 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Dec 12 16:18:42 crc kubenswrapper[4693]: I1212 16:18:42.589050 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2f36a798-eb1b-42f9-b874-79cd5e085e41","Type":"ContainerStarted","Data":"69253ac963acb833d8dacf62acf5804042e2d2841a5bec3b4670f9f736449150"}
Dec 12 16:18:43 crc kubenswrapper[4693]: I1212 16:18:43.387678 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6774670-3fa6-45d1-9174-72ceb917785b" path="/var/lib/kubelet/pods/c6774670-3fa6-45d1-9174-72ceb917785b/volumes"
Dec 12 16:18:43 crc kubenswrapper[4693]: I1212 16:18:43.607139 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2f36a798-eb1b-42f9-b874-79cd5e085e41","Type":"ContainerStarted","Data":"d573926ae0ee4d0a92d80ab9003ad18428a5e70f19a6285628a549af299d4561"}
event={"ID":"2f36a798-eb1b-42f9-b874-79cd5e085e41","Type":"ContainerStarted","Data":"70c238f035bebea3e7a7b3f9ea1b105fded1a72c7e9464137c8f134fb6dda54b"} Dec 12 16:18:45 crc kubenswrapper[4693]: I1212 16:18:45.638205 4693 generic.go:334] "Generic (PLEG): container finished" podID="e1e7981a-9706-4ed7-96b9-f2a1c65a6113" containerID="d217e93755c51ac676d8a07a8d69da1375bf7ba51099ef8ab9300898fcbd1fe6" exitCode=0 Dec 12 16:18:45 crc kubenswrapper[4693]: I1212 16:18:45.638583 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc" event={"ID":"e1e7981a-9706-4ed7-96b9-f2a1c65a6113","Type":"ContainerDied","Data":"d217e93755c51ac676d8a07a8d69da1375bf7ba51099ef8ab9300898fcbd1fe6"} Dec 12 16:18:45 crc kubenswrapper[4693]: I1212 16:18:45.646410 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2f36a798-eb1b-42f9-b874-79cd5e085e41","Type":"ContainerStarted","Data":"cb4e2c41efedd49c8c72905aa82f1fe4223bc4a72d3b6e8f697d0904d95d3040"} Dec 12 16:18:46 crc kubenswrapper[4693]: I1212 16:18:46.660098 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2f36a798-eb1b-42f9-b874-79cd5e085e41","Type":"ContainerStarted","Data":"4425a2e092c258082d8e341c623d89288c18543ebac4926ca58178d93d8e1960"} Dec 12 16:18:46 crc kubenswrapper[4693]: I1212 16:18:46.686258 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.179976104 podStartE2EDuration="5.686238225s" podCreationTimestamp="2025-12-12 16:18:41 +0000 UTC" firstStartedPulling="2025-12-12 16:18:42.471995546 +0000 UTC m=+1949.640635147" lastFinishedPulling="2025-12-12 16:18:45.978257667 +0000 UTC m=+1953.146897268" observedRunningTime="2025-12-12 16:18:46.679361727 +0000 UTC m=+1953.848001328" watchObservedRunningTime="2025-12-12 16:18:46.686238225 +0000 UTC m=+1953.854877836" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.302304 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.386049 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-ssh-key\") pod \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\" (UID: \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\") " Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.386162 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-repo-setup-combined-ca-bundle\") pod \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\" (UID: \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\") " Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.386244 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-inventory\") pod \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\" (UID: \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\") " Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.386419 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgb4w\" (UniqueName: \"kubernetes.io/projected/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-kube-api-access-mgb4w\") pod \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\" (UID: \"e1e7981a-9706-4ed7-96b9-f2a1c65a6113\") " Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.394150 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e1e7981a-9706-4ed7-96b9-f2a1c65a6113" (UID: "e1e7981a-9706-4ed7-96b9-f2a1c65a6113"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.394281 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-kube-api-access-mgb4w" (OuterVolumeSpecName: "kube-api-access-mgb4w") pod "e1e7981a-9706-4ed7-96b9-f2a1c65a6113" (UID: "e1e7981a-9706-4ed7-96b9-f2a1c65a6113"). InnerVolumeSpecName "kube-api-access-mgb4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.433720 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-inventory" (OuterVolumeSpecName: "inventory") pod "e1e7981a-9706-4ed7-96b9-f2a1c65a6113" (UID: "e1e7981a-9706-4ed7-96b9-f2a1c65a6113"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.439076 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e1e7981a-9706-4ed7-96b9-f2a1c65a6113" (UID: "e1e7981a-9706-4ed7-96b9-f2a1c65a6113"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.489093 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgb4w\" (UniqueName: \"kubernetes.io/projected/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-kube-api-access-mgb4w\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.489130 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.489141 4693 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.489152 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1e7981a-9706-4ed7-96b9-f2a1c65a6113-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.674018 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.674017 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-plwbc" event={"ID":"e1e7981a-9706-4ed7-96b9-f2a1c65a6113","Type":"ContainerDied","Data":"0fd66d406065d6297b4498d722175e547c82aee5f8d76d491d0d99062594b0e8"} Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.674126 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fd66d406065d6297b4498d722175e547c82aee5f8d76d491d0d99062594b0e8" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.789020 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb"] Dec 12 16:18:47 crc kubenswrapper[4693]: E1212 16:18:47.790221 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e7981a-9706-4ed7-96b9-f2a1c65a6113" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.790369 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e7981a-9706-4ed7-96b9-f2a1c65a6113" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.790795 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e7981a-9706-4ed7-96b9-f2a1c65a6113" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.791932 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.797825 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.798037 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.800055 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.802329 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.826384 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb"] Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.900882 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-stqdb\" (UID: \"5139b3ba-3ea1-435b-b7d4-8174b3ec9210\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.901225 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prdcj\" (UniqueName: \"kubernetes.io/projected/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-kube-api-access-prdcj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-stqdb\" (UID: \"5139b3ba-3ea1-435b-b7d4-8174b3ec9210\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" Dec 12 16:18:47 crc kubenswrapper[4693]: I1212 16:18:47.901462 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-stqdb\" (UID: \"5139b3ba-3ea1-435b-b7d4-8174b3ec9210\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" Dec 12 16:18:48 crc kubenswrapper[4693]: I1212 16:18:48.003422 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-stqdb\" (UID: \"5139b3ba-3ea1-435b-b7d4-8174b3ec9210\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" Dec 12 16:18:48 crc kubenswrapper[4693]: I1212 16:18:48.003566 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-stqdb\" (UID: \"5139b3ba-3ea1-435b-b7d4-8174b3ec9210\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" Dec 12 16:18:48 crc kubenswrapper[4693]: I1212 16:18:48.003626 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prdcj\" (UniqueName: \"kubernetes.io/projected/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-kube-api-access-prdcj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-stqdb\" (UID: \"5139b3ba-3ea1-435b-b7d4-8174b3ec9210\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" Dec 12 16:18:48 crc kubenswrapper[4693]: I1212 16:18:48.010194 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-stqdb\" (UID: \"5139b3ba-3ea1-435b-b7d4-8174b3ec9210\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" Dec 12 16:18:48 crc kubenswrapper[4693]: I1212 16:18:48.017136 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-stqdb\" (UID: \"5139b3ba-3ea1-435b-b7d4-8174b3ec9210\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" Dec 12 16:18:48 crc kubenswrapper[4693]: I1212 16:18:48.023976 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prdcj\" (UniqueName: \"kubernetes.io/projected/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-kube-api-access-prdcj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-stqdb\" (UID: \"5139b3ba-3ea1-435b-b7d4-8174b3ec9210\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" Dec 12 16:18:48 crc kubenswrapper[4693]: I1212 16:18:48.124543 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" Dec 12 16:18:48 crc kubenswrapper[4693]: W1212 16:18:48.672933 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5139b3ba_3ea1_435b_b7d4_8174b3ec9210.slice/crio-a397e300929dff54843a7fa5bb6a3364578b44e0a3a2b3390307bb1c2898cd9a WatchSource:0}: Error finding container a397e300929dff54843a7fa5bb6a3364578b44e0a3a2b3390307bb1c2898cd9a: Status 404 returned error can't find the container with id a397e300929dff54843a7fa5bb6a3364578b44e0a3a2b3390307bb1c2898cd9a Dec 12 16:18:48 crc kubenswrapper[4693]: I1212 16:18:48.675564 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb"] Dec 12 16:18:48 crc kubenswrapper[4693]: I1212 16:18:48.690034 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" event={"ID":"5139b3ba-3ea1-435b-b7d4-8174b3ec9210","Type":"ContainerStarted","Data":"a397e300929dff54843a7fa5bb6a3364578b44e0a3a2b3390307bb1c2898cd9a"} Dec 12 16:18:50 crc kubenswrapper[4693]: I1212 16:18:50.747512 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" event={"ID":"5139b3ba-3ea1-435b-b7d4-8174b3ec9210","Type":"ContainerStarted","Data":"e9557ea2a6a6c7e3c4e7db4db8fb5c2027bb5af2bf8389dbbb3f9cf897002a6e"} Dec 12 16:18:50 crc kubenswrapper[4693]: I1212 16:18:50.767891 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" podStartSLOduration=2.880944856 podStartE2EDuration="3.767869696s" podCreationTimestamp="2025-12-12 16:18:47 +0000 UTC" firstStartedPulling="2025-12-12 16:18:48.679926925 +0000 UTC m=+1955.848566536" lastFinishedPulling="2025-12-12 16:18:49.566851775 +0000 UTC m=+1956.735491376" observedRunningTime="2025-12-12 16:18:50.764531585 +0000 UTC m=+1957.933171186" watchObservedRunningTime="2025-12-12 16:18:50.767869696 +0000 UTC 
m=+1957.936509297" Dec 12 16:18:52 crc kubenswrapper[4693]: I1212 16:18:52.802100 4693 generic.go:334] "Generic (PLEG): container finished" podID="5139b3ba-3ea1-435b-b7d4-8174b3ec9210" containerID="e9557ea2a6a6c7e3c4e7db4db8fb5c2027bb5af2bf8389dbbb3f9cf897002a6e" exitCode=0 Dec 12 16:18:52 crc kubenswrapper[4693]: I1212 16:18:52.802235 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" event={"ID":"5139b3ba-3ea1-435b-b7d4-8174b3ec9210","Type":"ContainerDied","Data":"e9557ea2a6a6c7e3c4e7db4db8fb5c2027bb5af2bf8389dbbb3f9cf897002a6e"} Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.393412 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.478157 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prdcj\" (UniqueName: \"kubernetes.io/projected/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-kube-api-access-prdcj\") pod \"5139b3ba-3ea1-435b-b7d4-8174b3ec9210\" (UID: \"5139b3ba-3ea1-435b-b7d4-8174b3ec9210\") " Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.478725 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-inventory\") pod \"5139b3ba-3ea1-435b-b7d4-8174b3ec9210\" (UID: \"5139b3ba-3ea1-435b-b7d4-8174b3ec9210\") " Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.478820 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-ssh-key\") pod \"5139b3ba-3ea1-435b-b7d4-8174b3ec9210\" (UID: \"5139b3ba-3ea1-435b-b7d4-8174b3ec9210\") " Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.489932 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-kube-api-access-prdcj" (OuterVolumeSpecName: "kube-api-access-prdcj") pod "5139b3ba-3ea1-435b-b7d4-8174b3ec9210" (UID: "5139b3ba-3ea1-435b-b7d4-8174b3ec9210"). InnerVolumeSpecName "kube-api-access-prdcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.524537 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-inventory" (OuterVolumeSpecName: "inventory") pod "5139b3ba-3ea1-435b-b7d4-8174b3ec9210" (UID: "5139b3ba-3ea1-435b-b7d4-8174b3ec9210"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.536516 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5139b3ba-3ea1-435b-b7d4-8174b3ec9210" (UID: "5139b3ba-3ea1-435b-b7d4-8174b3ec9210"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.582056 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.582102 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.582116 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prdcj\" (UniqueName: \"kubernetes.io/projected/5139b3ba-3ea1-435b-b7d4-8174b3ec9210-kube-api-access-prdcj\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.828418 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.828450 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-stqdb" event={"ID":"5139b3ba-3ea1-435b-b7d4-8174b3ec9210","Type":"ContainerDied","Data":"a397e300929dff54843a7fa5bb6a3364578b44e0a3a2b3390307bb1c2898cd9a"} Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.828530 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a397e300929dff54843a7fa5bb6a3364578b44e0a3a2b3390307bb1c2898cd9a" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.911133 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf"] Dec 12 16:18:54 crc kubenswrapper[4693]: E1212 16:18:54.911838 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5139b3ba-3ea1-435b-b7d4-8174b3ec9210" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.911858 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="5139b3ba-3ea1-435b-b7d4-8174b3ec9210" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.912179 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="5139b3ba-3ea1-435b-b7d4-8174b3ec9210" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.913389 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.915035 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.915217 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.917743 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.917772 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.938185 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf"] Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.995156 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf\" (UID: \"eb355093-a7b2-4ab1-a525-a05808e9bd81\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.995381 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjz95\" (UniqueName: \"kubernetes.io/projected/eb355093-a7b2-4ab1-a525-a05808e9bd81-kube-api-access-jjz95\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf\" (UID: \"eb355093-a7b2-4ab1-a525-a05808e9bd81\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.995418 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf\" (UID: \"eb355093-a7b2-4ab1-a525-a05808e9bd81\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" Dec 12 16:18:54 crc kubenswrapper[4693]: I1212 16:18:54.995570 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf\" (UID: \"eb355093-a7b2-4ab1-a525-a05808e9bd81\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" Dec 12 16:18:55 crc kubenswrapper[4693]: I1212 16:18:55.098650 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf\" (UID: \"eb355093-a7b2-4ab1-a525-a05808e9bd81\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" Dec 12 16:18:55 crc kubenswrapper[4693]: I1212 16:18:55.098729 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjz95\" (UniqueName: \"kubernetes.io/projected/eb355093-a7b2-4ab1-a525-a05808e9bd81-kube-api-access-jjz95\") 
pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf\" (UID: \"eb355093-a7b2-4ab1-a525-a05808e9bd81\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" Dec 12 16:18:55 crc kubenswrapper[4693]: I1212 16:18:55.098970 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf\" (UID: \"eb355093-a7b2-4ab1-a525-a05808e9bd81\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" Dec 12 16:18:55 crc kubenswrapper[4693]: I1212 16:18:55.099061 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf\" (UID: \"eb355093-a7b2-4ab1-a525-a05808e9bd81\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" Dec 12 16:18:55 crc kubenswrapper[4693]: I1212 16:18:55.102591 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf\" (UID: \"eb355093-a7b2-4ab1-a525-a05808e9bd81\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" Dec 12 16:18:55 crc kubenswrapper[4693]: I1212 16:18:55.115429 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf\" (UID: \"eb355093-a7b2-4ab1-a525-a05808e9bd81\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" Dec 12 16:18:55 crc kubenswrapper[4693]: I1212 16:18:55.116186 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf\" (UID: \"eb355093-a7b2-4ab1-a525-a05808e9bd81\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" Dec 12 16:18:55 crc kubenswrapper[4693]: I1212 16:18:55.129981 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjz95\" (UniqueName: \"kubernetes.io/projected/eb355093-a7b2-4ab1-a525-a05808e9bd81-kube-api-access-jjz95\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf\" (UID: \"eb355093-a7b2-4ab1-a525-a05808e9bd81\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" Dec 12 16:18:55 crc kubenswrapper[4693]: I1212 16:18:55.272648 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" Dec 12 16:18:55 crc kubenswrapper[4693]: W1212 16:18:55.829530 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb355093_a7b2_4ab1_a525_a05808e9bd81.slice/crio-3b8e142e3825ab4366dc32c50abbe99a54c165c36ad160b00b96dd2a416c9d64 WatchSource:0}: Error finding container 3b8e142e3825ab4366dc32c50abbe99a54c165c36ad160b00b96dd2a416c9d64: Status 404 returned error can't find the container with id 3b8e142e3825ab4366dc32c50abbe99a54c165c36ad160b00b96dd2a416c9d64 Dec 12 16:18:55 crc kubenswrapper[4693]: I1212 16:18:55.832464 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf"] Dec 12 16:18:56 crc kubenswrapper[4693]: I1212 16:18:56.106351 4693 scope.go:117] "RemoveContainer" containerID="82943592f68ec84059ddd8b82d45d20c94d7399e765736c70a4e6ec7f3aa7fa0" Dec 12 16:18:56 crc kubenswrapper[4693]: I1212 16:18:56.860740 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" event={"ID":"eb355093-a7b2-4ab1-a525-a05808e9bd81","Type":"ContainerStarted","Data":"3b8e142e3825ab4366dc32c50abbe99a54c165c36ad160b00b96dd2a416c9d64"} Dec 12 16:18:57 crc kubenswrapper[4693]: I1212 16:18:57.876322 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" event={"ID":"eb355093-a7b2-4ab1-a525-a05808e9bd81","Type":"ContainerStarted","Data":"181e9966353cb66c4b1af6370085117eee995200fd1b28d2aa6ea142a8bfb65f"} Dec 12 16:18:57 crc kubenswrapper[4693]: I1212 16:18:57.908321 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" podStartSLOduration=3.057052201 podStartE2EDuration="3.908302008s" podCreationTimestamp="2025-12-12 16:18:54 +0000 UTC" firstStartedPulling="2025-12-12 16:18:55.834469853 +0000 UTC m=+1963.003109464" lastFinishedPulling="2025-12-12 16:18:56.68571967 +0000 UTC m=+1963.854359271" observedRunningTime="2025-12-12 16:18:57.897510294 +0000 UTC m=+1965.066149915" watchObservedRunningTime="2025-12-12 16:18:57.908302008 +0000 UTC m=+1965.076941629" Dec 12 16:19:04 crc kubenswrapper[4693]: I1212 16:19:04.965038 4693 generic.go:334] "Generic (PLEG): container finished" podID="8e84ac54-c034-447a-99a4-3050a7d7eb18" containerID="ea82f375d7399c115a90e9619e1273632ae50486114f08b1a61dba53b0a56120" exitCode=0 Dec 12 16:19:04 crc kubenswrapper[4693]: I1212 16:19:04.965147 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"8e84ac54-c034-447a-99a4-3050a7d7eb18","Type":"ContainerDied","Data":"ea82f375d7399c115a90e9619e1273632ae50486114f08b1a61dba53b0a56120"} Dec 12 16:19:05 crc kubenswrapper[4693]: I1212 16:19:05.981165 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"8e84ac54-c034-447a-99a4-3050a7d7eb18","Type":"ContainerStarted","Data":"e092cabf68b0650be16d8959cec46abcfed8c3fd7400d151e5830e4f4e4af9bb"} Dec 12 16:19:05 crc kubenswrapper[4693]: I1212 16:19:05.981688 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Dec 12 16:19:06 crc kubenswrapper[4693]: I1212 16:19:06.005904 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.005877948 
podStartE2EDuration="37.005877948s" podCreationTimestamp="2025-12-12 16:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:19:06.004202212 +0000 UTC m=+1973.172841853" watchObservedRunningTime="2025-12-12 16:19:06.005877948 +0000 UTC m=+1973.174517579" Dec 12 16:19:19 crc kubenswrapper[4693]: I1212 16:19:19.835663 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Dec 12 16:19:19 crc kubenswrapper[4693]: I1212 16:19:19.940876 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 12 16:19:24 crc kubenswrapper[4693]: I1212 16:19:24.029564 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" containerName="rabbitmq" containerID="cri-o://faad30d19c72dd4da12844f0eb713da00223d8434d0e7eb5e4614acca5061063" gracePeriod=604796 Dec 12 16:19:27 crc kubenswrapper[4693]: I1212 16:19:27.788644 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.294343 4693 generic.go:334] "Generic (PLEG): container finished" podID="2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" containerID="faad30d19c72dd4da12844f0eb713da00223d8434d0e7eb5e4614acca5061063" exitCode=0 Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.294441 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db","Type":"ContainerDied","Data":"faad30d19c72dd4da12844f0eb713da00223d8434d0e7eb5e4614acca5061063"} Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.295804 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db","Type":"ContainerDied","Data":"13698531da397467abb80465328e33597291eb7bbb89a23658ed1bdf255e7dbe"} Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.295926 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13698531da397467abb80465328e33597291eb7bbb89a23658ed1bdf255e7dbe" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.310646 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.448679 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-plugins\") pod \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.449412 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" (UID: "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.450692 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c30820e7-7bd6-46ee-92cd-615319618f91\") pod \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.450801 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-config-data\") pod \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.450839 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-pod-info\") pod \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.450870 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-erlang-cookie\") pod \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.450909 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-plugins-conf\") pod \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.450971 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-tls\") pod \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.451005 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9mdk\" (UniqueName: \"kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-kube-api-access-m9mdk\") pod \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.451034 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-confd\") pod \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.451067 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-server-conf\") pod \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.451108 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-erlang-cookie-secret\") pod 
\"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\" (UID: \"2d1046a8-e83f-4c4f-8ac3-1110bb6f62db\") " Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.453076 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" (UID: "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.453087 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" (UID: "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.460180 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" (UID: "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.460630 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-kube-api-access-m9mdk" (OuterVolumeSpecName: "kube-api-access-m9mdk") pod "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" (UID: "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db"). InnerVolumeSpecName "kube-api-access-m9mdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.460715 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" (UID: "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.471194 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-pod-info" (OuterVolumeSpecName: "pod-info") pod "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" (UID: "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.478081 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c30820e7-7bd6-46ee-92cd-615319618f91" (OuterVolumeSpecName: "persistence") pod "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" (UID: "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db"). InnerVolumeSpecName "pvc-c30820e7-7bd6-46ee-92cd-615319618f91". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.501149 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-config-data" (OuterVolumeSpecName: "config-data") pod "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" (UID: "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.534632 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-server-conf" (OuterVolumeSpecName: "server-conf") pod "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" (UID: "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.553439 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.553479 4693 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-pod-info\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.553493 4693 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.553506 4693 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.553518 4693 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.553529 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9mdk\" (UniqueName: \"kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-kube-api-access-m9mdk\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.553540 4693 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-server-conf\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.553551 4693 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.553562 4693 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.553616 4693 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c30820e7-7bd6-46ee-92cd-615319618f91\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c30820e7-7bd6-46ee-92cd-615319618f91\") on node \"crc\" " Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.591383 4693 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.591628 4693 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c30820e7-7bd6-46ee-92cd-615319618f91" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c30820e7-7bd6-46ee-92cd-615319618f91") on node "crc" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.611177 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" (UID: "2d1046a8-e83f-4c4f-8ac3-1110bb6f62db"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.655621 4693 reconciler_common.go:293] "Volume detached for volume \"pvc-c30820e7-7bd6-46ee-92cd-615319618f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c30820e7-7bd6-46ee-92cd-615319618f91\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:31 crc kubenswrapper[4693]: I1212 16:19:31.655670 4693 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.307418 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.384287 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.400378 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.484553 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 12 16:19:32 crc kubenswrapper[4693]: E1212 16:19:32.485352 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" containerName="setup-container" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.485371 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" containerName="setup-container" Dec 12 16:19:32 crc kubenswrapper[4693]: E1212 16:19:32.485401 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" containerName="rabbitmq" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.485409 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" containerName="rabbitmq" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.485714 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" containerName="rabbitmq" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.487871 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.495385 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.685443 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4694e5ac-290d-44c3-bdcb-d6f48836eb49-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.685738 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c30820e7-7bd6-46ee-92cd-615319618f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c30820e7-7bd6-46ee-92cd-615319618f91\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.685849 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4694e5ac-290d-44c3-bdcb-d6f48836eb49-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.685938 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4694e5ac-290d-44c3-bdcb-d6f48836eb49-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.686050 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4694e5ac-290d-44c3-bdcb-d6f48836eb49-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.686133 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbtmv\" (UniqueName: \"kubernetes.io/projected/4694e5ac-290d-44c3-bdcb-d6f48836eb49-kube-api-access-wbtmv\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.686232 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4694e5ac-290d-44c3-bdcb-d6f48836eb49-config-data\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.686355 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4694e5ac-290d-44c3-bdcb-d6f48836eb49-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.686470 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/4694e5ac-290d-44c3-bdcb-d6f48836eb49-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.686604 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4694e5ac-290d-44c3-bdcb-d6f48836eb49-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.686689 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4694e5ac-290d-44c3-bdcb-d6f48836eb49-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.788554 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4694e5ac-290d-44c3-bdcb-d6f48836eb49-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.789020 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4694e5ac-290d-44c3-bdcb-d6f48836eb49-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.789160 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c30820e7-7bd6-46ee-92cd-615319618f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c30820e7-7bd6-46ee-92cd-615319618f91\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.789335 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4694e5ac-290d-44c3-bdcb-d6f48836eb49-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.789482 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4694e5ac-290d-44c3-bdcb-d6f48836eb49-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.789668 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4694e5ac-290d-44c3-bdcb-d6f48836eb49-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.790658 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbtmv\" (UniqueName: \"kubernetes.io/projected/4694e5ac-290d-44c3-bdcb-d6f48836eb49-kube-api-access-wbtmv\") pod \"rabbitmq-server-0\" (UID: 
\"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.790181 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4694e5ac-290d-44c3-bdcb-d6f48836eb49-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.790802 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4694e5ac-290d-44c3-bdcb-d6f48836eb49-config-data\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.790908 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4694e5ac-290d-44c3-bdcb-d6f48836eb49-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.790972 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4694e5ac-290d-44c3-bdcb-d6f48836eb49-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.791124 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4694e5ac-290d-44c3-bdcb-d6f48836eb49-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.791795 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4694e5ac-290d-44c3-bdcb-d6f48836eb49-config-data\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.792100 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4694e5ac-290d-44c3-bdcb-d6f48836eb49-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.792319 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4694e5ac-290d-44c3-bdcb-d6f48836eb49-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.794027 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4694e5ac-290d-44c3-bdcb-d6f48836eb49-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.794135 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/4694e5ac-290d-44c3-bdcb-d6f48836eb49-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.795075 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4694e5ac-290d-44c3-bdcb-d6f48836eb49-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.797372 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4694e5ac-290d-44c3-bdcb-d6f48836eb49-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.798644 4693 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.798675 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c30820e7-7bd6-46ee-92cd-615319618f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c30820e7-7bd6-46ee-92cd-615319618f91\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/230e84eca29ef978c6e938c2248beafe68d1fc4f5fdf1e28b05ba9d43b4abe39/globalmount\"" pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.799435 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4694e5ac-290d-44c3-bdcb-d6f48836eb49-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.819364 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbtmv\" (UniqueName: \"kubernetes.io/projected/4694e5ac-290d-44c3-bdcb-d6f48836eb49-kube-api-access-wbtmv\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:32 crc kubenswrapper[4693]: I1212 16:19:32.878239 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c30820e7-7bd6-46ee-92cd-615319618f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c30820e7-7bd6-46ee-92cd-615319618f91\") pod \"rabbitmq-server-0\" (UID: \"4694e5ac-290d-44c3-bdcb-d6f48836eb49\") " pod="openstack/rabbitmq-server-0" Dec 12 16:19:33 crc kubenswrapper[4693]: I1212 16:19:33.121754 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 12 16:19:33 crc kubenswrapper[4693]: I1212 16:19:33.376565 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d1046a8-e83f-4c4f-8ac3-1110bb6f62db" path="/var/lib/kubelet/pods/2d1046a8-e83f-4c4f-8ac3-1110bb6f62db/volumes" Dec 12 16:19:33 crc kubenswrapper[4693]: I1212 16:19:33.668925 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 12 16:19:34 crc kubenswrapper[4693]: I1212 16:19:34.349483 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4694e5ac-290d-44c3-bdcb-d6f48836eb49","Type":"ContainerStarted","Data":"4cc61e47209fa716b797e5c2bfc6506f59d89f4945540263c7b6ca378809a628"} Dec 12 16:19:36 crc kubenswrapper[4693]: I1212 16:19:36.403045 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4694e5ac-290d-44c3-bdcb-d6f48836eb49","Type":"ContainerStarted","Data":"a2acf001d5e65ba1387ba897174c6cb3e04af5b7664bb25a53fd501a6429e35d"} Dec 12 16:19:56 crc kubenswrapper[4693]: I1212 16:19:56.300484 4693 scope.go:117] "RemoveContainer" containerID="4c78635e88ffc50cb76174a91f620a324f579e0c56181a8a4d6d9312a4b81d4b" Dec 12 16:19:56 crc kubenswrapper[4693]: I1212 16:19:56.327784 4693 scope.go:117] "RemoveContainer" containerID="3167185dcb78e8afab1e2cc577722bddfcb001ac1bae3df9790b36700676e095" Dec 12 16:19:56 crc kubenswrapper[4693]: I1212 16:19:56.370432 4693 scope.go:117] "RemoveContainer" containerID="6263a31351a8b66e00866ea115ef46512c88016a614fa3e3c0c54fd7cd8bc30a" Dec 12 16:19:56 crc kubenswrapper[4693]: I1212 16:19:56.395988 4693 scope.go:117] "RemoveContainer" containerID="dee514335d58c5942ca31bc163d90679b98e6a7c1885fcd4ea02437a78426476" Dec 12 16:19:56 crc kubenswrapper[4693]: I1212 16:19:56.520538 4693 scope.go:117] "RemoveContainer" containerID="d60dbc8e9e816078cd2978fc224ebc7f601dd63e3098705808f6e4c56d6fadd2" Dec 12 16:19:56 crc kubenswrapper[4693]: I1212 16:19:56.558298 4693 scope.go:117] "RemoveContainer" containerID="13000f0f370c49923d3e6b02013db9c3642e15b1748956210a6a375d579eb6cb" Dec 12 16:19:56 crc kubenswrapper[4693]: I1212 16:19:56.601004 4693 scope.go:117] "RemoveContainer" containerID="fd96b42dabb03d7aaed49b229b246e86583eca48821e58de5daf3b8361a64c36" Dec 12 16:19:57 crc kubenswrapper[4693]: I1212 16:19:57.021773 4693 scope.go:117] "RemoveContainer" containerID="4199dbc803e76efad2b01d09b01976cd73c96ef2282fd5d932840a4833b52cff" Dec 12 16:19:57 crc kubenswrapper[4693]: I1212 16:19:57.050106 4693 scope.go:117] "RemoveContainer" containerID="faad30d19c72dd4da12844f0eb713da00223d8434d0e7eb5e4614acca5061063" Dec 12 16:20:08 crc kubenswrapper[4693]: I1212 16:20:08.789153 4693 generic.go:334] "Generic (PLEG): container finished" podID="4694e5ac-290d-44c3-bdcb-d6f48836eb49" containerID="a2acf001d5e65ba1387ba897174c6cb3e04af5b7664bb25a53fd501a6429e35d" exitCode=0 Dec 12 16:20:08 crc kubenswrapper[4693]: I1212 16:20:08.789265 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4694e5ac-290d-44c3-bdcb-d6f48836eb49","Type":"ContainerDied","Data":"a2acf001d5e65ba1387ba897174c6cb3e04af5b7664bb25a53fd501a6429e35d"} Dec 12 16:20:09 crc kubenswrapper[4693]: I1212 16:20:09.803771 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4694e5ac-290d-44c3-bdcb-d6f48836eb49","Type":"ContainerStarted","Data":"45b65253bcb2d595da9dee3f75879e4cdd8c937dc63f934e9c709af965cc4cb9"} Dec 12 
16:20:09 crc kubenswrapper[4693]: I1212 16:20:09.804666 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 12 16:20:09 crc kubenswrapper[4693]: I1212 16:20:09.828708 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.828683428 podStartE2EDuration="37.828683428s" podCreationTimestamp="2025-12-12 16:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:20:09.827767223 +0000 UTC m=+2036.996406874" watchObservedRunningTime="2025-12-12 16:20:09.828683428 +0000 UTC m=+2036.997323049" Dec 12 16:20:13 crc kubenswrapper[4693]: I1212 16:20:13.683429 4693 patch_prober.go:28] interesting pod/route-controller-manager-66fdbf566b-4w29d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 16:20:13 crc kubenswrapper[4693]: I1212 16:20:13.684020 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" podUID="41b94683-51bf-4720-9160-36bd373d88ba" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 16:20:23 crc kubenswrapper[4693]: I1212 16:20:23.125693 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.531487 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hxgtr"] Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.538034 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.558068 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5754d"] Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.568587 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.571937 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hxgtr"] Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.592714 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5754d"] Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.619977 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qztr\" (UniqueName: \"kubernetes.io/projected/f600095c-dcb6-440d-8f24-6a7e045e4e3c-kube-api-access-8qztr\") pod \"certified-operators-5754d\" (UID: \"f600095c-dcb6-440d-8f24-6a7e045e4e3c\") " pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.620117 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f600095c-dcb6-440d-8f24-6a7e045e4e3c-utilities\") pod \"certified-operators-5754d\" (UID: \"f600095c-dcb6-440d-8f24-6a7e045e4e3c\") " pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.620239 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mtnv\" (UniqueName: \"kubernetes.io/projected/e946d2c7-ef68-4660-83c1-57261d7a5e86-kube-api-access-7mtnv\") pod \"redhat-marketplace-hxgtr\" (UID: \"e946d2c7-ef68-4660-83c1-57261d7a5e86\") " pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.620318 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f600095c-dcb6-440d-8f24-6a7e045e4e3c-catalog-content\") pod \"certified-operators-5754d\" (UID: \"f600095c-dcb6-440d-8f24-6a7e045e4e3c\") " pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.622679 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e946d2c7-ef68-4660-83c1-57261d7a5e86-catalog-content\") pod \"redhat-marketplace-hxgtr\" (UID: \"e946d2c7-ef68-4660-83c1-57261d7a5e86\") " pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.622779 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e946d2c7-ef68-4660-83c1-57261d7a5e86-utilities\") pod \"redhat-marketplace-hxgtr\" (UID: \"e946d2c7-ef68-4660-83c1-57261d7a5e86\") " pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.724473 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f600095c-dcb6-440d-8f24-6a7e045e4e3c-utilities\") pod \"certified-operators-5754d\" (UID: \"f600095c-dcb6-440d-8f24-6a7e045e4e3c\") " pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.724566 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mtnv\" (UniqueName: 
\"kubernetes.io/projected/e946d2c7-ef68-4660-83c1-57261d7a5e86-kube-api-access-7mtnv\") pod \"redhat-marketplace-hxgtr\" (UID: \"e946d2c7-ef68-4660-83c1-57261d7a5e86\") " pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.724594 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f600095c-dcb6-440d-8f24-6a7e045e4e3c-catalog-content\") pod \"certified-operators-5754d\" (UID: \"f600095c-dcb6-440d-8f24-6a7e045e4e3c\") " pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.724658 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e946d2c7-ef68-4660-83c1-57261d7a5e86-catalog-content\") pod \"redhat-marketplace-hxgtr\" (UID: \"e946d2c7-ef68-4660-83c1-57261d7a5e86\") " pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.724693 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e946d2c7-ef68-4660-83c1-57261d7a5e86-utilities\") pod \"redhat-marketplace-hxgtr\" (UID: \"e946d2c7-ef68-4660-83c1-57261d7a5e86\") " pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.724783 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qztr\" (UniqueName: \"kubernetes.io/projected/f600095c-dcb6-440d-8f24-6a7e045e4e3c-kube-api-access-8qztr\") pod \"certified-operators-5754d\" (UID: \"f600095c-dcb6-440d-8f24-6a7e045e4e3c\") " pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.725460 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f600095c-dcb6-440d-8f24-6a7e045e4e3c-catalog-content\") pod \"certified-operators-5754d\" (UID: \"f600095c-dcb6-440d-8f24-6a7e045e4e3c\") " pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.725750 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e946d2c7-ef68-4660-83c1-57261d7a5e86-catalog-content\") pod \"redhat-marketplace-hxgtr\" (UID: \"e946d2c7-ef68-4660-83c1-57261d7a5e86\") " pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.725920 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f600095c-dcb6-440d-8f24-6a7e045e4e3c-utilities\") pod \"certified-operators-5754d\" (UID: \"f600095c-dcb6-440d-8f24-6a7e045e4e3c\") " pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.725950 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e946d2c7-ef68-4660-83c1-57261d7a5e86-utilities\") pod \"redhat-marketplace-hxgtr\" (UID: \"e946d2c7-ef68-4660-83c1-57261d7a5e86\") " pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.759701 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mtnv\" (UniqueName: 
\"kubernetes.io/projected/e946d2c7-ef68-4660-83c1-57261d7a5e86-kube-api-access-7mtnv\") pod \"redhat-marketplace-hxgtr\" (UID: \"e946d2c7-ef68-4660-83c1-57261d7a5e86\") " pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.772139 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qztr\" (UniqueName: \"kubernetes.io/projected/f600095c-dcb6-440d-8f24-6a7e045e4e3c-kube-api-access-8qztr\") pod \"certified-operators-5754d\" (UID: \"f600095c-dcb6-440d-8f24-6a7e045e4e3c\") " pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.886732 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:25 crc kubenswrapper[4693]: I1212 16:20:25.919663 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:26 crc kubenswrapper[4693]: I1212 16:20:26.542190 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hxgtr"] Dec 12 16:20:26 crc kubenswrapper[4693]: W1212 16:20:26.657488 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf600095c_dcb6_440d_8f24_6a7e045e4e3c.slice/crio-c4c9b34201bfd612b8c53be287821c1245d4cef9d86bab76fd60098e45527653 WatchSource:0}: Error finding container c4c9b34201bfd612b8c53be287821c1245d4cef9d86bab76fd60098e45527653: Status 404 returned error can't find the container with id c4c9b34201bfd612b8c53be287821c1245d4cef9d86bab76fd60098e45527653 Dec 12 16:20:26 crc kubenswrapper[4693]: I1212 16:20:26.657978 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5754d"] Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.017027 4693 generic.go:334] "Generic (PLEG): container finished" podID="f600095c-dcb6-440d-8f24-6a7e045e4e3c" containerID="5f9edb429a239549374d7cfaab1535eb9d7ce0aae00c699b563fe32518234e42" exitCode=0 Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.017139 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5754d" event={"ID":"f600095c-dcb6-440d-8f24-6a7e045e4e3c","Type":"ContainerDied","Data":"5f9edb429a239549374d7cfaab1535eb9d7ce0aae00c699b563fe32518234e42"} Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.017515 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5754d" event={"ID":"f600095c-dcb6-440d-8f24-6a7e045e4e3c","Type":"ContainerStarted","Data":"c4c9b34201bfd612b8c53be287821c1245d4cef9d86bab76fd60098e45527653"} Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.019459 4693 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.019787 4693 generic.go:334] "Generic (PLEG): container finished" podID="e946d2c7-ef68-4660-83c1-57261d7a5e86" containerID="16875eaa1bf3a2fa1c78ba049d253ee0dc0b233a63a80500f5ac96c3cf646051" exitCode=0 Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.019845 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hxgtr" event={"ID":"e946d2c7-ef68-4660-83c1-57261d7a5e86","Type":"ContainerDied","Data":"16875eaa1bf3a2fa1c78ba049d253ee0dc0b233a63a80500f5ac96c3cf646051"} Dec 12 16:20:27 crc 
kubenswrapper[4693]: I1212 16:20:27.019885 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hxgtr" event={"ID":"e946d2c7-ef68-4660-83c1-57261d7a5e86","Type":"ContainerStarted","Data":"ffe3e1259d22ecbe0e5d98acbe2f78433929588eb8dfec1cf14f08786e8b57fe"} Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.570621 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f9wdt"] Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.573879 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9wdt" Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.609371 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9wdt"] Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.712661 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9aa40a-9560-4c6e-81b4-e0960a88c322-utilities\") pod \"redhat-operators-f9wdt\" (UID: \"fd9aa40a-9560-4c6e-81b4-e0960a88c322\") " pod="openshift-marketplace/redhat-operators-f9wdt" Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.712736 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n2sb\" (UniqueName: \"kubernetes.io/projected/fd9aa40a-9560-4c6e-81b4-e0960a88c322-kube-api-access-6n2sb\") pod \"redhat-operators-f9wdt\" (UID: \"fd9aa40a-9560-4c6e-81b4-e0960a88c322\") " pod="openshift-marketplace/redhat-operators-f9wdt" Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.712916 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9aa40a-9560-4c6e-81b4-e0960a88c322-catalog-content\") pod \"redhat-operators-f9wdt\" (UID: \"fd9aa40a-9560-4c6e-81b4-e0960a88c322\") " pod="openshift-marketplace/redhat-operators-f9wdt" Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.821102 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9aa40a-9560-4c6e-81b4-e0960a88c322-catalog-content\") pod \"redhat-operators-f9wdt\" (UID: \"fd9aa40a-9560-4c6e-81b4-e0960a88c322\") " pod="openshift-marketplace/redhat-operators-f9wdt" Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.821194 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9aa40a-9560-4c6e-81b4-e0960a88c322-utilities\") pod \"redhat-operators-f9wdt\" (UID: \"fd9aa40a-9560-4c6e-81b4-e0960a88c322\") " pod="openshift-marketplace/redhat-operators-f9wdt" Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.821243 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n2sb\" (UniqueName: \"kubernetes.io/projected/fd9aa40a-9560-4c6e-81b4-e0960a88c322-kube-api-access-6n2sb\") pod \"redhat-operators-f9wdt\" (UID: \"fd9aa40a-9560-4c6e-81b4-e0960a88c322\") " pod="openshift-marketplace/redhat-operators-f9wdt" Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.822169 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9aa40a-9560-4c6e-81b4-e0960a88c322-catalog-content\") pod \"redhat-operators-f9wdt\" (UID: 
\"fd9aa40a-9560-4c6e-81b4-e0960a88c322\") " pod="openshift-marketplace/redhat-operators-f9wdt" Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.822404 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9aa40a-9560-4c6e-81b4-e0960a88c322-utilities\") pod \"redhat-operators-f9wdt\" (UID: \"fd9aa40a-9560-4c6e-81b4-e0960a88c322\") " pod="openshift-marketplace/redhat-operators-f9wdt" Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.850195 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n2sb\" (UniqueName: \"kubernetes.io/projected/fd9aa40a-9560-4c6e-81b4-e0960a88c322-kube-api-access-6n2sb\") pod \"redhat-operators-f9wdt\" (UID: \"fd9aa40a-9560-4c6e-81b4-e0960a88c322\") " pod="openshift-marketplace/redhat-operators-f9wdt" Dec 12 16:20:27 crc kubenswrapper[4693]: I1212 16:20:27.933823 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9wdt" Dec 12 16:20:28 crc kubenswrapper[4693]: I1212 16:20:28.571966 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9wdt"] Dec 12 16:20:29 crc kubenswrapper[4693]: I1212 16:20:29.053395 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hxgtr" event={"ID":"e946d2c7-ef68-4660-83c1-57261d7a5e86","Type":"ContainerStarted","Data":"f5077984fd03e00a0d19a47a9ecc9867541a5cbe83ea736157e8cbeca3303eb7"} Dec 12 16:20:29 crc kubenswrapper[4693]: I1212 16:20:29.054715 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9wdt" event={"ID":"fd9aa40a-9560-4c6e-81b4-e0960a88c322","Type":"ContainerStarted","Data":"2d8c1de10651879062e591849bd9137a0ebb7292d6dd4e827958ccd8ea2451e5"} Dec 12 16:20:29 crc kubenswrapper[4693]: I1212 16:20:29.056711 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5754d" event={"ID":"f600095c-dcb6-440d-8f24-6a7e045e4e3c","Type":"ContainerStarted","Data":"cd530a16d79abaee299bc30a4d9b992287f9b967d712526f78d5b334f22bc23f"} Dec 12 16:20:30 crc kubenswrapper[4693]: I1212 16:20:30.093772 4693 generic.go:334] "Generic (PLEG): container finished" podID="fd9aa40a-9560-4c6e-81b4-e0960a88c322" containerID="003fbff7ea99510f23714d66abac451add8e2dfb942b7bd3bd6ca9d274939a0d" exitCode=0 Dec 12 16:20:30 crc kubenswrapper[4693]: I1212 16:20:30.093900 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9wdt" event={"ID":"fd9aa40a-9560-4c6e-81b4-e0960a88c322","Type":"ContainerDied","Data":"003fbff7ea99510f23714d66abac451add8e2dfb942b7bd3bd6ca9d274939a0d"} Dec 12 16:20:30 crc kubenswrapper[4693]: I1212 16:20:30.097190 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hxgtr" event={"ID":"e946d2c7-ef68-4660-83c1-57261d7a5e86","Type":"ContainerDied","Data":"f5077984fd03e00a0d19a47a9ecc9867541a5cbe83ea736157e8cbeca3303eb7"} Dec 12 16:20:30 crc kubenswrapper[4693]: I1212 16:20:30.097053 4693 generic.go:334] "Generic (PLEG): container finished" podID="e946d2c7-ef68-4660-83c1-57261d7a5e86" containerID="f5077984fd03e00a0d19a47a9ecc9867541a5cbe83ea736157e8cbeca3303eb7" exitCode=0 Dec 12 16:20:31 crc kubenswrapper[4693]: I1212 16:20:31.125027 4693 generic.go:334] "Generic (PLEG): container finished" podID="f600095c-dcb6-440d-8f24-6a7e045e4e3c" 
containerID="cd530a16d79abaee299bc30a4d9b992287f9b967d712526f78d5b334f22bc23f" exitCode=0 Dec 12 16:20:31 crc kubenswrapper[4693]: I1212 16:20:31.125123 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5754d" event={"ID":"f600095c-dcb6-440d-8f24-6a7e045e4e3c","Type":"ContainerDied","Data":"cd530a16d79abaee299bc30a4d9b992287f9b967d712526f78d5b334f22bc23f"} Dec 12 16:20:32 crc kubenswrapper[4693]: I1212 16:20:32.143614 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9wdt" event={"ID":"fd9aa40a-9560-4c6e-81b4-e0960a88c322","Type":"ContainerStarted","Data":"030a52df540930c7cad38dc22b343b76124a749411ed09fe4c10e5237652b5ef"} Dec 12 16:20:32 crc kubenswrapper[4693]: I1212 16:20:32.150898 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hxgtr" event={"ID":"e946d2c7-ef68-4660-83c1-57261d7a5e86","Type":"ContainerStarted","Data":"347a1ed5acec3f0461ccc18518d757a7be334952b4dede2dfec6be49ad2e0209"} Dec 12 16:20:32 crc kubenswrapper[4693]: I1212 16:20:32.218160 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hxgtr" podStartSLOduration=2.480849161 podStartE2EDuration="7.218133262s" podCreationTimestamp="2025-12-12 16:20:25 +0000 UTC" firstStartedPulling="2025-12-12 16:20:27.021868947 +0000 UTC m=+2054.190508538" lastFinishedPulling="2025-12-12 16:20:31.759153018 +0000 UTC m=+2058.927792639" observedRunningTime="2025-12-12 16:20:32.211202533 +0000 UTC m=+2059.379842154" watchObservedRunningTime="2025-12-12 16:20:32.218133262 +0000 UTC m=+2059.386772883" Dec 12 16:20:33 crc kubenswrapper[4693]: I1212 16:20:33.166718 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5754d" event={"ID":"f600095c-dcb6-440d-8f24-6a7e045e4e3c","Type":"ContainerStarted","Data":"4482ccbfa092c93e72c2c62488bbcb62acb4a61c0eefdc70c2e68a25f9a51629"} Dec 12 16:20:33 crc kubenswrapper[4693]: I1212 16:20:33.213343 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5754d" podStartSLOduration=3.1233541320000002 podStartE2EDuration="8.213315147s" podCreationTimestamp="2025-12-12 16:20:25 +0000 UTC" firstStartedPulling="2025-12-12 16:20:27.019157463 +0000 UTC m=+2054.187797064" lastFinishedPulling="2025-12-12 16:20:32.109118438 +0000 UTC m=+2059.277758079" observedRunningTime="2025-12-12 16:20:33.204581899 +0000 UTC m=+2060.373221500" watchObservedRunningTime="2025-12-12 16:20:33.213315147 +0000 UTC m=+2060.381954748" Dec 12 16:20:35 crc kubenswrapper[4693]: I1212 16:20:35.888180 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:35 crc kubenswrapper[4693]: I1212 16:20:35.888735 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:35 crc kubenswrapper[4693]: I1212 16:20:35.920307 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:35 crc kubenswrapper[4693]: I1212 16:20:35.920567 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:37 crc kubenswrapper[4693]: I1212 16:20:37.312311 4693 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-5754d" podUID="f600095c-dcb6-440d-8f24-6a7e045e4e3c" containerName="registry-server" probeResult="failure" output=< Dec 12 16:20:37 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 16:20:37 crc kubenswrapper[4693]: > Dec 12 16:20:37 crc kubenswrapper[4693]: I1212 16:20:37.323064 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hxgtr" podUID="e946d2c7-ef68-4660-83c1-57261d7a5e86" containerName="registry-server" probeResult="failure" output=< Dec 12 16:20:37 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 16:20:37 crc kubenswrapper[4693]: > Dec 12 16:20:40 crc kubenswrapper[4693]: I1212 16:20:40.248344 4693 generic.go:334] "Generic (PLEG): container finished" podID="fd9aa40a-9560-4c6e-81b4-e0960a88c322" containerID="030a52df540930c7cad38dc22b343b76124a749411ed09fe4c10e5237652b5ef" exitCode=0 Dec 12 16:20:40 crc kubenswrapper[4693]: I1212 16:20:40.248466 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9wdt" event={"ID":"fd9aa40a-9560-4c6e-81b4-e0960a88c322","Type":"ContainerDied","Data":"030a52df540930c7cad38dc22b343b76124a749411ed09fe4c10e5237652b5ef"} Dec 12 16:20:42 crc kubenswrapper[4693]: I1212 16:20:42.269028 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9wdt" event={"ID":"fd9aa40a-9560-4c6e-81b4-e0960a88c322","Type":"ContainerStarted","Data":"2e25ccc9be9c953502bdce81fc45342dbd06116cc00485fe1a9d7ae93fb06d9e"} Dec 12 16:20:42 crc kubenswrapper[4693]: I1212 16:20:42.300192 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f9wdt" podStartSLOduration=4.622237922 podStartE2EDuration="15.300170249s" podCreationTimestamp="2025-12-12 16:20:27 +0000 UTC" firstStartedPulling="2025-12-12 16:20:30.096218034 +0000 UTC m=+2057.264857635" lastFinishedPulling="2025-12-12 16:20:40.774150361 +0000 UTC m=+2067.942789962" observedRunningTime="2025-12-12 16:20:42.287199935 +0000 UTC m=+2069.455839546" watchObservedRunningTime="2025-12-12 16:20:42.300170249 +0000 UTC m=+2069.468809850" Dec 12 16:20:42 crc kubenswrapper[4693]: I1212 16:20:42.530182 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:20:42 crc kubenswrapper[4693]: I1212 16:20:42.530778 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:20:45 crc kubenswrapper[4693]: I1212 16:20:45.955041 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:45 crc kubenswrapper[4693]: I1212 16:20:45.988943 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:46 crc kubenswrapper[4693]: I1212 16:20:46.040595 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:46 crc kubenswrapper[4693]: I1212 16:20:46.077379 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:47 crc kubenswrapper[4693]: I1212 16:20:47.934259 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f9wdt" Dec 12 16:20:47 crc kubenswrapper[4693]: I1212 16:20:47.934853 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f9wdt" Dec 12 16:20:48 crc kubenswrapper[4693]: I1212 16:20:48.200193 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hxgtr"] Dec 12 16:20:48 crc kubenswrapper[4693]: I1212 16:20:48.200507 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hxgtr" podUID="e946d2c7-ef68-4660-83c1-57261d7a5e86" containerName="registry-server" containerID="cri-o://347a1ed5acec3f0461ccc18518d757a7be334952b4dede2dfec6be49ad2e0209" gracePeriod=2 Dec 12 16:20:48 crc kubenswrapper[4693]: I1212 16:20:48.375355 4693 generic.go:334] "Generic (PLEG): container finished" podID="e946d2c7-ef68-4660-83c1-57261d7a5e86" containerID="347a1ed5acec3f0461ccc18518d757a7be334952b4dede2dfec6be49ad2e0209" exitCode=0 Dec 12 16:20:48 crc kubenswrapper[4693]: I1212 16:20:48.375597 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hxgtr" event={"ID":"e946d2c7-ef68-4660-83c1-57261d7a5e86","Type":"ContainerDied","Data":"347a1ed5acec3f0461ccc18518d757a7be334952b4dede2dfec6be49ad2e0209"} Dec 12 16:20:48 crc kubenswrapper[4693]: I1212 16:20:48.448290 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5754d"] Dec 12 16:20:48 crc kubenswrapper[4693]: I1212 16:20:48.449377 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5754d" podUID="f600095c-dcb6-440d-8f24-6a7e045e4e3c" containerName="registry-server" containerID="cri-o://4482ccbfa092c93e72c2c62488bbcb62acb4a61c0eefdc70c2e68a25f9a51629" gracePeriod=2 Dec 12 16:20:48 crc kubenswrapper[4693]: I1212 16:20:48.806107 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:48 crc kubenswrapper[4693]: I1212 16:20:48.907240 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mtnv\" (UniqueName: \"kubernetes.io/projected/e946d2c7-ef68-4660-83c1-57261d7a5e86-kube-api-access-7mtnv\") pod \"e946d2c7-ef68-4660-83c1-57261d7a5e86\" (UID: \"e946d2c7-ef68-4660-83c1-57261d7a5e86\") " Dec 12 16:20:48 crc kubenswrapper[4693]: I1212 16:20:48.907538 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e946d2c7-ef68-4660-83c1-57261d7a5e86-catalog-content\") pod \"e946d2c7-ef68-4660-83c1-57261d7a5e86\" (UID: \"e946d2c7-ef68-4660-83c1-57261d7a5e86\") " Dec 12 16:20:48 crc kubenswrapper[4693]: I1212 16:20:48.907627 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e946d2c7-ef68-4660-83c1-57261d7a5e86-utilities\") pod \"e946d2c7-ef68-4660-83c1-57261d7a5e86\" (UID: \"e946d2c7-ef68-4660-83c1-57261d7a5e86\") " Dec 12 16:20:48 crc kubenswrapper[4693]: I1212 16:20:48.908635 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e946d2c7-ef68-4660-83c1-57261d7a5e86-utilities" (OuterVolumeSpecName: "utilities") pod "e946d2c7-ef68-4660-83c1-57261d7a5e86" (UID: "e946d2c7-ef68-4660-83c1-57261d7a5e86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:20:48 crc kubenswrapper[4693]: I1212 16:20:48.913148 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e946d2c7-ef68-4660-83c1-57261d7a5e86-kube-api-access-7mtnv" (OuterVolumeSpecName: "kube-api-access-7mtnv") pod "e946d2c7-ef68-4660-83c1-57261d7a5e86" (UID: "e946d2c7-ef68-4660-83c1-57261d7a5e86"). InnerVolumeSpecName "kube-api-access-7mtnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:20:48 crc kubenswrapper[4693]: I1212 16:20:48.929506 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e946d2c7-ef68-4660-83c1-57261d7a5e86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e946d2c7-ef68-4660-83c1-57261d7a5e86" (UID: "e946d2c7-ef68-4660-83c1-57261d7a5e86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:20:48 crc kubenswrapper[4693]: I1212 16:20:48.987099 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f9wdt" podUID="fd9aa40a-9560-4c6e-81b4-e0960a88c322" containerName="registry-server" probeResult="failure" output=< Dec 12 16:20:48 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 16:20:48 crc kubenswrapper[4693]: > Dec 12 16:20:48 crc kubenswrapper[4693]: I1212 16:20:48.987308 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.009428 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f600095c-dcb6-440d-8f24-6a7e045e4e3c-utilities\") pod \"f600095c-dcb6-440d-8f24-6a7e045e4e3c\" (UID: \"f600095c-dcb6-440d-8f24-6a7e045e4e3c\") " Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.009519 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qztr\" (UniqueName: \"kubernetes.io/projected/f600095c-dcb6-440d-8f24-6a7e045e4e3c-kube-api-access-8qztr\") pod \"f600095c-dcb6-440d-8f24-6a7e045e4e3c\" (UID: \"f600095c-dcb6-440d-8f24-6a7e045e4e3c\") " Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.009714 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f600095c-dcb6-440d-8f24-6a7e045e4e3c-catalog-content\") pod \"f600095c-dcb6-440d-8f24-6a7e045e4e3c\" (UID: \"f600095c-dcb6-440d-8f24-6a7e045e4e3c\") " Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.010263 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f600095c-dcb6-440d-8f24-6a7e045e4e3c-utilities" (OuterVolumeSpecName: "utilities") pod "f600095c-dcb6-440d-8f24-6a7e045e4e3c" (UID: "f600095c-dcb6-440d-8f24-6a7e045e4e3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.010920 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f600095c-dcb6-440d-8f24-6a7e045e4e3c-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.010946 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e946d2c7-ef68-4660-83c1-57261d7a5e86-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.010963 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mtnv\" (UniqueName: \"kubernetes.io/projected/e946d2c7-ef68-4660-83c1-57261d7a5e86-kube-api-access-7mtnv\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.010977 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e946d2c7-ef68-4660-83c1-57261d7a5e86-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.013502 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f600095c-dcb6-440d-8f24-6a7e045e4e3c-kube-api-access-8qztr" (OuterVolumeSpecName: "kube-api-access-8qztr") pod "f600095c-dcb6-440d-8f24-6a7e045e4e3c" (UID: "f600095c-dcb6-440d-8f24-6a7e045e4e3c"). InnerVolumeSpecName "kube-api-access-8qztr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.085077 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f600095c-dcb6-440d-8f24-6a7e045e4e3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f600095c-dcb6-440d-8f24-6a7e045e4e3c" (UID: "f600095c-dcb6-440d-8f24-6a7e045e4e3c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.115088 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f600095c-dcb6-440d-8f24-6a7e045e4e3c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.115132 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qztr\" (UniqueName: \"kubernetes.io/projected/f600095c-dcb6-440d-8f24-6a7e045e4e3c-kube-api-access-8qztr\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.390179 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hxgtr" event={"ID":"e946d2c7-ef68-4660-83c1-57261d7a5e86","Type":"ContainerDied","Data":"ffe3e1259d22ecbe0e5d98acbe2f78433929588eb8dfec1cf14f08786e8b57fe"} Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.390207 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hxgtr" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.390518 4693 scope.go:117] "RemoveContainer" containerID="347a1ed5acec3f0461ccc18518d757a7be334952b4dede2dfec6be49ad2e0209" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.394607 4693 generic.go:334] "Generic (PLEG): container finished" podID="f600095c-dcb6-440d-8f24-6a7e045e4e3c" containerID="4482ccbfa092c93e72c2c62488bbcb62acb4a61c0eefdc70c2e68a25f9a51629" exitCode=0 Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.394685 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5754d" event={"ID":"f600095c-dcb6-440d-8f24-6a7e045e4e3c","Type":"ContainerDied","Data":"4482ccbfa092c93e72c2c62488bbcb62acb4a61c0eefdc70c2e68a25f9a51629"} Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.394708 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5754d" event={"ID":"f600095c-dcb6-440d-8f24-6a7e045e4e3c","Type":"ContainerDied","Data":"c4c9b34201bfd612b8c53be287821c1245d4cef9d86bab76fd60098e45527653"} Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.394763 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5754d" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.427417 4693 scope.go:117] "RemoveContainer" containerID="f5077984fd03e00a0d19a47a9ecc9867541a5cbe83ea736157e8cbeca3303eb7" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.433022 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hxgtr"] Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.449727 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hxgtr"] Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.455912 4693 scope.go:117] "RemoveContainer" containerID="16875eaa1bf3a2fa1c78ba049d253ee0dc0b233a63a80500f5ac96c3cf646051" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.460402 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5754d"] Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.472451 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5754d"] Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.525742 4693 scope.go:117] "RemoveContainer" containerID="4482ccbfa092c93e72c2c62488bbcb62acb4a61c0eefdc70c2e68a25f9a51629" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.586426 4693 scope.go:117] "RemoveContainer" containerID="cd530a16d79abaee299bc30a4d9b992287f9b967d712526f78d5b334f22bc23f" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.618869 4693 scope.go:117] "RemoveContainer" containerID="5f9edb429a239549374d7cfaab1535eb9d7ce0aae00c699b563fe32518234e42" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.691640 4693 scope.go:117] "RemoveContainer" containerID="4482ccbfa092c93e72c2c62488bbcb62acb4a61c0eefdc70c2e68a25f9a51629" Dec 12 16:20:49 crc kubenswrapper[4693]: E1212 16:20:49.692171 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4482ccbfa092c93e72c2c62488bbcb62acb4a61c0eefdc70c2e68a25f9a51629\": container with ID starting with 4482ccbfa092c93e72c2c62488bbcb62acb4a61c0eefdc70c2e68a25f9a51629 not found: ID does not exist" containerID="4482ccbfa092c93e72c2c62488bbcb62acb4a61c0eefdc70c2e68a25f9a51629" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.692227 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4482ccbfa092c93e72c2c62488bbcb62acb4a61c0eefdc70c2e68a25f9a51629"} err="failed to get container status \"4482ccbfa092c93e72c2c62488bbcb62acb4a61c0eefdc70c2e68a25f9a51629\": rpc error: code = NotFound desc = could not find container \"4482ccbfa092c93e72c2c62488bbcb62acb4a61c0eefdc70c2e68a25f9a51629\": container with ID starting with 4482ccbfa092c93e72c2c62488bbcb62acb4a61c0eefdc70c2e68a25f9a51629 not found: ID does not exist" Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.692260 4693 scope.go:117] "RemoveContainer" containerID="cd530a16d79abaee299bc30a4d9b992287f9b967d712526f78d5b334f22bc23f" Dec 12 16:20:49 crc kubenswrapper[4693]: E1212 16:20:49.692683 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd530a16d79abaee299bc30a4d9b992287f9b967d712526f78d5b334f22bc23f\": container with ID starting with cd530a16d79abaee299bc30a4d9b992287f9b967d712526f78d5b334f22bc23f not found: ID does not exist" containerID="cd530a16d79abaee299bc30a4d9b992287f9b967d712526f78d5b334f22bc23f" Dec 12 
16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.692723 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd530a16d79abaee299bc30a4d9b992287f9b967d712526f78d5b334f22bc23f"} err="failed to get container status \"cd530a16d79abaee299bc30a4d9b992287f9b967d712526f78d5b334f22bc23f\": rpc error: code = NotFound desc = could not find container \"cd530a16d79abaee299bc30a4d9b992287f9b967d712526f78d5b334f22bc23f\": container with ID starting with cd530a16d79abaee299bc30a4d9b992287f9b967d712526f78d5b334f22bc23f not found: ID does not exist"
Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.692750 4693 scope.go:117] "RemoveContainer" containerID="5f9edb429a239549374d7cfaab1535eb9d7ce0aae00c699b563fe32518234e42"
Dec 12 16:20:49 crc kubenswrapper[4693]: E1212 16:20:49.693009 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f9edb429a239549374d7cfaab1535eb9d7ce0aae00c699b563fe32518234e42\": container with ID starting with 5f9edb429a239549374d7cfaab1535eb9d7ce0aae00c699b563fe32518234e42 not found: ID does not exist" containerID="5f9edb429a239549374d7cfaab1535eb9d7ce0aae00c699b563fe32518234e42"
Dec 12 16:20:49 crc kubenswrapper[4693]: I1212 16:20:49.693039 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f9edb429a239549374d7cfaab1535eb9d7ce0aae00c699b563fe32518234e42"} err="failed to get container status \"5f9edb429a239549374d7cfaab1535eb9d7ce0aae00c699b563fe32518234e42\": rpc error: code = NotFound desc = could not find container \"5f9edb429a239549374d7cfaab1535eb9d7ce0aae00c699b563fe32518234e42\": container with ID starting with 5f9edb429a239549374d7cfaab1535eb9d7ce0aae00c699b563fe32518234e42 not found: ID does not exist"
Dec 12 16:20:51 crc kubenswrapper[4693]: I1212 16:20:51.373723 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e946d2c7-ef68-4660-83c1-57261d7a5e86" path="/var/lib/kubelet/pods/e946d2c7-ef68-4660-83c1-57261d7a5e86/volumes"
Dec 12 16:20:51 crc kubenswrapper[4693]: I1212 16:20:51.374834 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f600095c-dcb6-440d-8f24-6a7e045e4e3c" path="/var/lib/kubelet/pods/f600095c-dcb6-440d-8f24-6a7e045e4e3c/volumes"
Dec 12 16:20:57 crc kubenswrapper[4693]: I1212 16:20:57.186375 4693 scope.go:117] "RemoveContainer" containerID="998bb426726973b9aa11bb36c0a273f2e860a00db6800e9bfb59edd73a3302a1"
Dec 12 16:20:57 crc kubenswrapper[4693]: I1212 16:20:57.996174 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f9wdt"
Dec 12 16:20:58 crc kubenswrapper[4693]: I1212 16:20:58.070320 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f9wdt"
Dec 12 16:20:58 crc kubenswrapper[4693]: I1212 16:20:58.233975 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9wdt"]
Dec 12 16:20:59 crc kubenswrapper[4693]: I1212 16:20:59.517844 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f9wdt" podUID="fd9aa40a-9560-4c6e-81b4-e0960a88c322" containerName="registry-server" containerID="cri-o://2e25ccc9be9c953502bdce81fc45342dbd06116cc00485fe1a9d7ae93fb06d9e" gracePeriod=2
Dec 12 16:21:00 crc kubenswrapper[4693]: I1212 16:21:00.540627 4693 generic.go:334] "Generic (PLEG): container finished" podID="fd9aa40a-9560-4c6e-81b4-e0960a88c322" containerID="2e25ccc9be9c953502bdce81fc45342dbd06116cc00485fe1a9d7ae93fb06d9e" exitCode=0
Dec 12 16:21:00 crc kubenswrapper[4693]: I1212 16:21:00.541005 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9wdt" event={"ID":"fd9aa40a-9560-4c6e-81b4-e0960a88c322","Type":"ContainerDied","Data":"2e25ccc9be9c953502bdce81fc45342dbd06116cc00485fe1a9d7ae93fb06d9e"}
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.147601 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9wdt"
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.304636 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9aa40a-9560-4c6e-81b4-e0960a88c322-utilities\") pod \"fd9aa40a-9560-4c6e-81b4-e0960a88c322\" (UID: \"fd9aa40a-9560-4c6e-81b4-e0960a88c322\") "
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.304708 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n2sb\" (UniqueName: \"kubernetes.io/projected/fd9aa40a-9560-4c6e-81b4-e0960a88c322-kube-api-access-6n2sb\") pod \"fd9aa40a-9560-4c6e-81b4-e0960a88c322\" (UID: \"fd9aa40a-9560-4c6e-81b4-e0960a88c322\") "
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.304812 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9aa40a-9560-4c6e-81b4-e0960a88c322-catalog-content\") pod \"fd9aa40a-9560-4c6e-81b4-e0960a88c322\" (UID: \"fd9aa40a-9560-4c6e-81b4-e0960a88c322\") "
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.305639 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd9aa40a-9560-4c6e-81b4-e0960a88c322-utilities" (OuterVolumeSpecName: "utilities") pod "fd9aa40a-9560-4c6e-81b4-e0960a88c322" (UID: "fd9aa40a-9560-4c6e-81b4-e0960a88c322"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.315523 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd9aa40a-9560-4c6e-81b4-e0960a88c322-kube-api-access-6n2sb" (OuterVolumeSpecName: "kube-api-access-6n2sb") pod "fd9aa40a-9560-4c6e-81b4-e0960a88c322" (UID: "fd9aa40a-9560-4c6e-81b4-e0960a88c322"). InnerVolumeSpecName "kube-api-access-6n2sb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.407660 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9aa40a-9560-4c6e-81b4-e0960a88c322-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.407701 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n2sb\" (UniqueName: \"kubernetes.io/projected/fd9aa40a-9560-4c6e-81b4-e0960a88c322-kube-api-access-6n2sb\") on node \"crc\" DevicePath \"\""
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.437281 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd9aa40a-9560-4c6e-81b4-e0960a88c322-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd9aa40a-9560-4c6e-81b4-e0960a88c322" (UID: "fd9aa40a-9560-4c6e-81b4-e0960a88c322"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.510148 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9aa40a-9560-4c6e-81b4-e0960a88c322-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.559305 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9wdt" event={"ID":"fd9aa40a-9560-4c6e-81b4-e0960a88c322","Type":"ContainerDied","Data":"2d8c1de10651879062e591849bd9137a0ebb7292d6dd4e827958ccd8ea2451e5"}
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.559379 4693 scope.go:117] "RemoveContainer" containerID="2e25ccc9be9c953502bdce81fc45342dbd06116cc00485fe1a9d7ae93fb06d9e"
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.559482 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9wdt"
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.605027 4693 scope.go:117] "RemoveContainer" containerID="030a52df540930c7cad38dc22b343b76124a749411ed09fe4c10e5237652b5ef"
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.619230 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9wdt"]
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.633012 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f9wdt"]
Dec 12 16:21:01 crc kubenswrapper[4693]: I1212 16:21:01.638283 4693 scope.go:117] "RemoveContainer" containerID="003fbff7ea99510f23714d66abac451add8e2dfb942b7bd3bd6ca9d274939a0d"
Dec 12 16:21:03 crc kubenswrapper[4693]: I1212 16:21:03.385601 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd9aa40a-9560-4c6e-81b4-e0960a88c322" path="/var/lib/kubelet/pods/fd9aa40a-9560-4c6e-81b4-e0960a88c322/volumes"
Dec 12 16:21:10 crc kubenswrapper[4693]: I1212 16:21:10.044962 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-8580-account-create-update-nqj68"]
Dec 12 16:21:10 crc kubenswrapper[4693]: I1212 16:21:10.057573 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-cwzlg"]
Dec 12 16:21:10 crc kubenswrapper[4693]: I1212 16:21:10.068251 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-8580-account-create-update-nqj68"]
Dec 12 16:21:10 crc kubenswrapper[4693]: I1212 16:21:10.079084 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-cwzlg"]
Dec 12 16:21:11 crc kubenswrapper[4693]: I1212 16:21:11.374791 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08f2fcab-af71-4ce3-b895-b65b0fea7736" path="/var/lib/kubelet/pods/08f2fcab-af71-4ce3-b895-b65b0fea7736/volumes"
Dec 12 16:21:11 crc kubenswrapper[4693]: I1212 16:21:11.379497 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eab5418-24ec-488e-a57f-f35b117fa044" path="/var/lib/kubelet/pods/4eab5418-24ec-488e-a57f-f35b117fa044/volumes"
Dec 12 16:21:12 crc kubenswrapper[4693]: I1212 16:21:12.530853 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:21:12 crc kubenswrapper[4693]: I1212 16:21:12.531244 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:21:18 crc kubenswrapper[4693]: I1212 16:21:18.033413 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-mbsvw"]
Dec 12 16:21:18 crc kubenswrapper[4693]: I1212 16:21:18.048406 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-mbsvw"]
Dec 12 16:21:19 crc kubenswrapper[4693]: I1212 16:21:19.040574 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-nrjn8"]
Dec 12 16:21:19 crc kubenswrapper[4693]: I1212 16:21:19.055747 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-kwhwr"]
Dec 12 16:21:19 crc kubenswrapper[4693]: I1212 16:21:19.069548 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-nrjn8"]
Dec 12 16:21:19 crc kubenswrapper[4693]: I1212 16:21:19.083337 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-kwhwr"]
Dec 12 16:21:19 crc kubenswrapper[4693]: I1212 16:21:19.373096 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b334154-7111-4b39-b0fc-ffb79a331506" path="/var/lib/kubelet/pods/3b334154-7111-4b39-b0fc-ffb79a331506/volumes"
Dec 12 16:21:19 crc kubenswrapper[4693]: I1212 16:21:19.374380 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba5f3ece-76c3-42f5-ae1f-b47727c5217c" path="/var/lib/kubelet/pods/ba5f3ece-76c3-42f5-ae1f-b47727c5217c/volumes"
Dec 12 16:21:19 crc kubenswrapper[4693]: I1212 16:21:19.375634 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be946e65-78d9-4dd1-9a59-a290c6ae0f76" path="/var/lib/kubelet/pods/be946e65-78d9-4dd1-9a59-a290c6ae0f76/volumes"
Dec 12 16:21:20 crc kubenswrapper[4693]: I1212 16:21:20.032950 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-74dd-account-create-update-5h6zk"]
Dec 12 16:21:20 crc kubenswrapper[4693]: I1212 16:21:20.047146 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-9x29j"]
Dec 12 16:21:20 crc kubenswrapper[4693]: I1212 16:21:20.062337 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3cdf-account-create-update-5dd7p"]
Dec 12 16:21:20 crc kubenswrapper[4693]: I1212 16:21:20.075588 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-74dd-account-create-update-5h6zk"]
Dec 12 16:21:20 crc kubenswrapper[4693]: I1212 16:21:20.086097 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3cdf-account-create-update-5dd7p"]
Dec 12 16:21:20 crc kubenswrapper[4693]: I1212 16:21:20.096925 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-9x29j"]
Dec 12 16:21:21 crc kubenswrapper[4693]: I1212 16:21:21.372197 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ea7a477-0eef-4e79-bd49-d4e152de7553" path="/var/lib/kubelet/pods/0ea7a477-0eef-4e79-bd49-d4e152de7553/volumes"
Dec 12 16:21:21 crc kubenswrapper[4693]: I1212 16:21:21.374021 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2244ed54-0d24-4961-b46b-eb6bf52ae2dc" path="/var/lib/kubelet/pods/2244ed54-0d24-4961-b46b-eb6bf52ae2dc/volumes"
Dec 12 16:21:21 crc kubenswrapper[4693]: I1212 16:21:21.375039 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bd6e86f-1ef7-4178-8beb-70eadfa60001" path="/var/lib/kubelet/pods/8bd6e86f-1ef7-4178-8beb-70eadfa60001/volumes"
Dec 12 16:21:23 crc kubenswrapper[4693]: I1212 16:21:23.043135 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-170d-account-create-update-j789s"]
Dec 12 16:21:23 crc kubenswrapper[4693]: I1212 16:21:23.056203 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ee38-account-create-update-7hnjl"]
Dec 12 16:21:23 crc kubenswrapper[4693]: I1212 16:21:23.072790 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-170d-account-create-update-j789s"]
Dec 12 16:21:23 crc kubenswrapper[4693]: I1212 16:21:23.092388 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ee38-account-create-update-7hnjl"]
Dec 12 16:21:23 crc kubenswrapper[4693]: I1212 16:21:23.377814 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00d3b68b-c9df-4c00-ae1f-079a98130251" path="/var/lib/kubelet/pods/00d3b68b-c9df-4c00-ae1f-079a98130251/volumes"
Dec 12 16:21:23 crc kubenswrapper[4693]: I1212 16:21:23.379200 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874" path="/var/lib/kubelet/pods/3bce3ab0-98cb-4d62-aee5-d2cdcfdd9874/volumes"
Dec 12 16:21:42 crc kubenswrapper[4693]: I1212 16:21:42.531240 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:21:42 crc kubenswrapper[4693]: I1212 16:21:42.533718 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:21:42 crc kubenswrapper[4693]: I1212 16:21:42.533957 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c"
Dec 12 16:21:42 crc kubenswrapper[4693]: I1212 16:21:42.535424 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c3c61a76193f5b8920ed6ca3953db8f9d6878fcc15435a84ab980dcf1f2a982"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 12 16:21:42 crc kubenswrapper[4693]: I1212 16:21:42.535765 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://6c3c61a76193f5b8920ed6ca3953db8f9d6878fcc15435a84ab980dcf1f2a982" gracePeriod=600
Dec 12 16:21:43 crc kubenswrapper[4693]: I1212 16:21:43.074224 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="6c3c61a76193f5b8920ed6ca3953db8f9d6878fcc15435a84ab980dcf1f2a982" exitCode=0
Dec 12 16:21:43 crc kubenswrapper[4693]: I1212 16:21:43.074323 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"6c3c61a76193f5b8920ed6ca3953db8f9d6878fcc15435a84ab980dcf1f2a982"}
Dec 12 16:21:43 crc kubenswrapper[4693]: I1212 16:21:43.074621 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64"}
Dec 12 16:21:43 crc kubenswrapper[4693]: I1212 16:21:43.074645 4693 scope.go:117] "RemoveContainer" containerID="dd327778eca35cbf142cc22e04ca63aead618aa9ea4df85a1110d6438531fb67"
Dec 12 16:21:57 crc kubenswrapper[4693]: I1212 16:21:57.380554 4693 scope.go:117] "RemoveContainer" containerID="a2251985d994295837cb66a8a40ee806370a223d478b57901912d82c8033f2e4"
Dec 12 16:21:57 crc kubenswrapper[4693]: I1212 16:21:57.411132 4693 scope.go:117] "RemoveContainer" containerID="a975614ad602e60df12329bae171b3e7306adc87bdff99860a067673c85b6978"
Dec 12 16:21:57 crc kubenswrapper[4693]: I1212 16:21:57.466979 4693 scope.go:117] "RemoveContainer" containerID="7f54dcdf921ba41fe1e8076b67643ad5f4e0b6158063a98398a4bb48a12e5889"
Dec 12 16:21:57 crc kubenswrapper[4693]: I1212 16:21:57.537460 4693 scope.go:117] "RemoveContainer" containerID="a1a13a252b004359b32de5d2196620bc01615530155c96ef54747c3eb3aeba67"
Dec 12 16:21:57 crc kubenswrapper[4693]: I1212 16:21:57.589861 4693 scope.go:117] "RemoveContainer" containerID="abfcee745810706e3cc12fe27567f4011954ef9e61c60de2a02056eda83ea1f3"
Dec 12 16:21:57 crc kubenswrapper[4693]: I1212 16:21:57.647013 4693 scope.go:117] "RemoveContainer" containerID="708e5cd1c3747dc8283fef87e0295162c90f581f11018a87fabcc27407436213"
Dec 12 16:21:57 crc kubenswrapper[4693]: I1212 16:21:57.710548 4693 scope.go:117] "RemoveContainer" containerID="a0dd93bb76506bb93443932e086984f099e6e2fc799fe8b96feca58b382498f4"
Dec 12 16:21:57 crc kubenswrapper[4693]: I1212 16:21:57.733760 4693 scope.go:117] "RemoveContainer" containerID="866f8d3f52abbb76967f5e62e0237bf0d1647a6b636546633de982b63bc59a69"
Dec 12 16:21:57 crc kubenswrapper[4693]: I1212 16:21:57.799063 4693 scope.go:117] "RemoveContainer" containerID="ddfa940b4d974a7a9ee4247c01d6cdb79a242a43b2b1e299063d181f4328b89d"
Dec 12 16:21:57 crc kubenswrapper[4693]: I1212 16:21:57.846406 4693 scope.go:117] "RemoveContainer" containerID="bd14cc2089b15c29957b5209dba05a6d95edfc3415090bea0378f3bca81ba164"
Dec 12 16:21:57 crc kubenswrapper[4693]: I1212 16:21:57.881214 4693 scope.go:117] "RemoveContainer" containerID="3c343840b4d4edbc099d049e17c3504d5f96c7110c5f44e969e612848817a152"
Dec 12 16:21:58 crc kubenswrapper[4693]: I1212 16:21:58.079988 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0825-account-create-update-sdz9j"]
Dec 12 16:21:58 crc kubenswrapper[4693]: I1212 16:21:58.102987 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-734f-account-create-update-7mp6n"]
Dec 12 16:21:58 crc kubenswrapper[4693]: I1212 16:21:58.115469 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-gpffz"]
Dec 12 16:21:58 crc kubenswrapper[4693]: I1212 16:21:58.127053 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-8ecf-account-create-update-frq7s"]
pods=["openstack/heat-8ecf-account-create-update-frq7s"] Dec 12 16:21:58 crc kubenswrapper[4693]: I1212 16:21:58.139333 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0825-account-create-update-sdz9j"] Dec 12 16:21:58 crc kubenswrapper[4693]: I1212 16:21:58.152702 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-gpffz"] Dec 12 16:21:58 crc kubenswrapper[4693]: I1212 16:21:58.165704 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-nxx54"] Dec 12 16:21:58 crc kubenswrapper[4693]: I1212 16:21:58.176382 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0a52-account-create-update-6f92x"] Dec 12 16:21:58 crc kubenswrapper[4693]: I1212 16:21:58.200925 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-kwbtp"] Dec 12 16:21:58 crc kubenswrapper[4693]: I1212 16:21:58.217480 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-734f-account-create-update-7mp6n"] Dec 12 16:21:58 crc kubenswrapper[4693]: I1212 16:21:58.236489 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-8ecf-account-create-update-frq7s"] Dec 12 16:21:58 crc kubenswrapper[4693]: I1212 16:21:58.249388 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-fc4kg"] Dec 12 16:21:58 crc kubenswrapper[4693]: I1212 16:21:58.262592 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-nxx54"] Dec 12 16:21:58 crc kubenswrapper[4693]: I1212 16:21:58.275145 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-kwbtp"] Dec 12 16:21:58 crc kubenswrapper[4693]: I1212 16:21:58.288032 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0a52-account-create-update-6f92x"] Dec 12 16:21:58 crc kubenswrapper[4693]: I1212 16:21:58.299893 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-fc4kg"] Dec 12 16:21:59 crc kubenswrapper[4693]: I1212 16:21:59.375430 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f2d3ae1-1fd0-4058-9723-e136c74ee739" path="/var/lib/kubelet/pods/2f2d3ae1-1fd0-4058-9723-e136c74ee739/volumes" Dec 12 16:21:59 crc kubenswrapper[4693]: I1212 16:21:59.376741 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f60beb5-6256-43ae-9df7-c57bb4f2d27e" path="/var/lib/kubelet/pods/4f60beb5-6256-43ae-9df7-c57bb4f2d27e/volumes" Dec 12 16:21:59 crc kubenswrapper[4693]: I1212 16:21:59.377844 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8579a0a1-a803-4422-a697-02b34eb25fae" path="/var/lib/kubelet/pods/8579a0a1-a803-4422-a697-02b34eb25fae/volumes" Dec 12 16:21:59 crc kubenswrapper[4693]: I1212 16:21:59.379413 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92ff8de1-8616-43f7-8bd4-1de3f4730b5f" path="/var/lib/kubelet/pods/92ff8de1-8616-43f7-8bd4-1de3f4730b5f/volumes" Dec 12 16:21:59 crc kubenswrapper[4693]: I1212 16:21:59.381053 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9371580e-96d9-4d4e-96ef-d049476af5eb" path="/var/lib/kubelet/pods/9371580e-96d9-4d4e-96ef-d049476af5eb/volumes" Dec 12 16:21:59 crc kubenswrapper[4693]: I1212 16:21:59.382164 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0ae148a-761b-42e5-b88f-909c600a34fe" path="/var/lib/kubelet/pods/c0ae148a-761b-42e5-b88f-909c600a34fe/volumes" Dec 12 
16:21:59 crc kubenswrapper[4693]: I1212 16:21:59.384195 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0f26834-34b1-4d79-a777-d080ed1eb981" path="/var/lib/kubelet/pods/d0f26834-34b1-4d79-a777-d080ed1eb981/volumes" Dec 12 16:21:59 crc kubenswrapper[4693]: I1212 16:21:59.386252 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f36c30f6-251f-4311-b1cd-0094934a275a" path="/var/lib/kubelet/pods/f36c30f6-251f-4311-b1cd-0094934a275a/volumes" Dec 12 16:22:05 crc kubenswrapper[4693]: I1212 16:22:05.054300 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-md2sc"] Dec 12 16:22:05 crc kubenswrapper[4693]: I1212 16:22:05.069661 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-md2sc"] Dec 12 16:22:05 crc kubenswrapper[4693]: I1212 16:22:05.374143 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb7abcfa-6215-4393-ab01-7710ddd3055d" path="/var/lib/kubelet/pods/cb7abcfa-6215-4393-ab01-7710ddd3055d/volumes" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.070518 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qgf6n"] Dec 12 16:22:18 crc kubenswrapper[4693]: E1212 16:22:18.071554 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9aa40a-9560-4c6e-81b4-e0960a88c322" containerName="extract-content" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.071575 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9aa40a-9560-4c6e-81b4-e0960a88c322" containerName="extract-content" Dec 12 16:22:18 crc kubenswrapper[4693]: E1212 16:22:18.071594 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f600095c-dcb6-440d-8f24-6a7e045e4e3c" containerName="registry-server" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.071628 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f600095c-dcb6-440d-8f24-6a7e045e4e3c" containerName="registry-server" Dec 12 16:22:18 crc kubenswrapper[4693]: E1212 16:22:18.071640 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9aa40a-9560-4c6e-81b4-e0960a88c322" containerName="extract-utilities" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.071647 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9aa40a-9560-4c6e-81b4-e0960a88c322" containerName="extract-utilities" Dec 12 16:22:18 crc kubenswrapper[4693]: E1212 16:22:18.071663 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e946d2c7-ef68-4660-83c1-57261d7a5e86" containerName="extract-content" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.071669 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="e946d2c7-ef68-4660-83c1-57261d7a5e86" containerName="extract-content" Dec 12 16:22:18 crc kubenswrapper[4693]: E1212 16:22:18.071685 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f600095c-dcb6-440d-8f24-6a7e045e4e3c" containerName="extract-utilities" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.071690 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f600095c-dcb6-440d-8f24-6a7e045e4e3c" containerName="extract-utilities" Dec 12 16:22:18 crc kubenswrapper[4693]: E1212 16:22:18.071713 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f600095c-dcb6-440d-8f24-6a7e045e4e3c" containerName="extract-content" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.071719 4693 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f600095c-dcb6-440d-8f24-6a7e045e4e3c" containerName="extract-content" Dec 12 16:22:18 crc kubenswrapper[4693]: E1212 16:22:18.071732 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e946d2c7-ef68-4660-83c1-57261d7a5e86" containerName="extract-utilities" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.071737 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="e946d2c7-ef68-4660-83c1-57261d7a5e86" containerName="extract-utilities" Dec 12 16:22:18 crc kubenswrapper[4693]: E1212 16:22:18.071745 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e946d2c7-ef68-4660-83c1-57261d7a5e86" containerName="registry-server" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.071751 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="e946d2c7-ef68-4660-83c1-57261d7a5e86" containerName="registry-server" Dec 12 16:22:18 crc kubenswrapper[4693]: E1212 16:22:18.071766 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9aa40a-9560-4c6e-81b4-e0960a88c322" containerName="registry-server" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.071771 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9aa40a-9560-4c6e-81b4-e0960a88c322" containerName="registry-server" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.072026 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f600095c-dcb6-440d-8f24-6a7e045e4e3c" containerName="registry-server" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.072040 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9aa40a-9560-4c6e-81b4-e0960a88c322" containerName="registry-server" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.072052 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="e946d2c7-ef68-4660-83c1-57261d7a5e86" containerName="registry-server" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.073937 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.082879 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgf6n"] Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.130312 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-utilities\") pod \"community-operators-qgf6n\" (UID: \"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c\") " pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.130380 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-catalog-content\") pod \"community-operators-qgf6n\" (UID: \"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c\") " pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.130438 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nf55\" (UniqueName: \"kubernetes.io/projected/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-kube-api-access-4nf55\") pod \"community-operators-qgf6n\" (UID: \"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c\") " pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.232917 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-utilities\") pod \"community-operators-qgf6n\" (UID: \"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c\") " pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.232986 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-catalog-content\") pod \"community-operators-qgf6n\" (UID: \"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c\") " pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.233019 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nf55\" (UniqueName: \"kubernetes.io/projected/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-kube-api-access-4nf55\") pod \"community-operators-qgf6n\" (UID: \"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c\") " pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.233749 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-utilities\") pod \"community-operators-qgf6n\" (UID: \"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c\") " pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.233856 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-catalog-content\") pod \"community-operators-qgf6n\" (UID: \"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c\") " pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.260436 4693 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4nf55\" (UniqueName: \"kubernetes.io/projected/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-kube-api-access-4nf55\") pod \"community-operators-qgf6n\" (UID: \"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c\") " pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.417669 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:18 crc kubenswrapper[4693]: I1212 16:22:18.996202 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgf6n"] Dec 12 16:22:19 crc kubenswrapper[4693]: I1212 16:22:19.544302 4693 generic.go:334] "Generic (PLEG): container finished" podID="e621f6d7-d379-4e9f-8a18-96e8e81d6e3c" containerID="e4400a0590a62d98b0634a26859bbfa1993c96b0df13543913b7e39414f68fd7" exitCode=0 Dec 12 16:22:19 crc kubenswrapper[4693]: I1212 16:22:19.544684 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgf6n" event={"ID":"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c","Type":"ContainerDied","Data":"e4400a0590a62d98b0634a26859bbfa1993c96b0df13543913b7e39414f68fd7"} Dec 12 16:22:19 crc kubenswrapper[4693]: I1212 16:22:19.544786 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgf6n" event={"ID":"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c","Type":"ContainerStarted","Data":"b59419b1fb9cd8fbc772e7e0a72ba46095e469bd66452eaeb7fb9865fb73442c"} Dec 12 16:22:20 crc kubenswrapper[4693]: I1212 16:22:20.088168 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-r9p8t"] Dec 12 16:22:20 crc kubenswrapper[4693]: I1212 16:22:20.105608 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-r9p8t"] Dec 12 16:22:20 crc kubenswrapper[4693]: I1212 16:22:20.560703 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgf6n" event={"ID":"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c","Type":"ContainerStarted","Data":"9fe54c52f2539e0f4cebe6422ef42e942f4e4b2b9a788eeae7625506c87c041b"} Dec 12 16:22:21 crc kubenswrapper[4693]: I1212 16:22:21.372511 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56fb0f10-fbce-4aed-9a10-7128021ce48f" path="/var/lib/kubelet/pods/56fb0f10-fbce-4aed-9a10-7128021ce48f/volumes" Dec 12 16:22:21 crc kubenswrapper[4693]: I1212 16:22:21.574616 4693 generic.go:334] "Generic (PLEG): container finished" podID="e621f6d7-d379-4e9f-8a18-96e8e81d6e3c" containerID="9fe54c52f2539e0f4cebe6422ef42e942f4e4b2b9a788eeae7625506c87c041b" exitCode=0 Dec 12 16:22:21 crc kubenswrapper[4693]: I1212 16:22:21.574673 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgf6n" event={"ID":"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c","Type":"ContainerDied","Data":"9fe54c52f2539e0f4cebe6422ef42e942f4e4b2b9a788eeae7625506c87c041b"} Dec 12 16:22:23 crc kubenswrapper[4693]: I1212 16:22:23.604476 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgf6n" event={"ID":"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c","Type":"ContainerStarted","Data":"1fd80cb804fa82dfba8da192326b2ca98c441b70e593261328cfbe836c8b739d"} Dec 12 16:22:23 crc kubenswrapper[4693]: I1212 16:22:23.625906 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qgf6n" 
podStartSLOduration=2.742339145 podStartE2EDuration="5.625888208s" podCreationTimestamp="2025-12-12 16:22:18 +0000 UTC" firstStartedPulling="2025-12-12 16:22:19.547242134 +0000 UTC m=+2166.715881745" lastFinishedPulling="2025-12-12 16:22:22.430791207 +0000 UTC m=+2169.599430808" observedRunningTime="2025-12-12 16:22:23.619164285 +0000 UTC m=+2170.787803886" watchObservedRunningTime="2025-12-12 16:22:23.625888208 +0000 UTC m=+2170.794527809" Dec 12 16:22:28 crc kubenswrapper[4693]: I1212 16:22:28.418821 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:28 crc kubenswrapper[4693]: I1212 16:22:28.419476 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:28 crc kubenswrapper[4693]: I1212 16:22:28.471696 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:28 crc kubenswrapper[4693]: I1212 16:22:28.662630 4693 generic.go:334] "Generic (PLEG): container finished" podID="eb355093-a7b2-4ab1-a525-a05808e9bd81" containerID="181e9966353cb66c4b1af6370085117eee995200fd1b28d2aa6ea142a8bfb65f" exitCode=0 Dec 12 16:22:28 crc kubenswrapper[4693]: I1212 16:22:28.662666 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" event={"ID":"eb355093-a7b2-4ab1-a525-a05808e9bd81","Type":"ContainerDied","Data":"181e9966353cb66c4b1af6370085117eee995200fd1b28d2aa6ea142a8bfb65f"} Dec 12 16:22:28 crc kubenswrapper[4693]: I1212 16:22:28.710809 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:28 crc kubenswrapper[4693]: I1212 16:22:28.785742 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgf6n"] Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.291952 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.394828 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-ssh-key\") pod \"eb355093-a7b2-4ab1-a525-a05808e9bd81\" (UID: \"eb355093-a7b2-4ab1-a525-a05808e9bd81\") " Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.394945 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-bootstrap-combined-ca-bundle\") pod \"eb355093-a7b2-4ab1-a525-a05808e9bd81\" (UID: \"eb355093-a7b2-4ab1-a525-a05808e9bd81\") " Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.394987 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-inventory\") pod \"eb355093-a7b2-4ab1-a525-a05808e9bd81\" (UID: \"eb355093-a7b2-4ab1-a525-a05808e9bd81\") " Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.395018 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjz95\" (UniqueName: \"kubernetes.io/projected/eb355093-a7b2-4ab1-a525-a05808e9bd81-kube-api-access-jjz95\") pod \"eb355093-a7b2-4ab1-a525-a05808e9bd81\" (UID: \"eb355093-a7b2-4ab1-a525-a05808e9bd81\") " Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.400521 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "eb355093-a7b2-4ab1-a525-a05808e9bd81" (UID: "eb355093-a7b2-4ab1-a525-a05808e9bd81"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.417314 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb355093-a7b2-4ab1-a525-a05808e9bd81-kube-api-access-jjz95" (OuterVolumeSpecName: "kube-api-access-jjz95") pod "eb355093-a7b2-4ab1-a525-a05808e9bd81" (UID: "eb355093-a7b2-4ab1-a525-a05808e9bd81"). InnerVolumeSpecName "kube-api-access-jjz95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.433639 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-inventory" (OuterVolumeSpecName: "inventory") pod "eb355093-a7b2-4ab1-a525-a05808e9bd81" (UID: "eb355093-a7b2-4ab1-a525-a05808e9bd81"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.436329 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eb355093-a7b2-4ab1-a525-a05808e9bd81" (UID: "eb355093-a7b2-4ab1-a525-a05808e9bd81"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.500816 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.501040 4693 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.501058 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb355093-a7b2-4ab1-a525-a05808e9bd81-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.501071 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjz95\" (UniqueName: \"kubernetes.io/projected/eb355093-a7b2-4ab1-a525-a05808e9bd81-kube-api-access-jjz95\") on node \"crc\" DevicePath \"\"" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.699659 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" event={"ID":"eb355093-a7b2-4ab1-a525-a05808e9bd81","Type":"ContainerDied","Data":"3b8e142e3825ab4366dc32c50abbe99a54c165c36ad160b00b96dd2a416c9d64"} Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.699987 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b8e142e3825ab4366dc32c50abbe99a54c165c36ad160b00b96dd2a416c9d64" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.699887 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qgf6n" podUID="e621f6d7-d379-4e9f-8a18-96e8e81d6e3c" containerName="registry-server" containerID="cri-o://1fd80cb804fa82dfba8da192326b2ca98c441b70e593261328cfbe836c8b739d" gracePeriod=2 Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.699682 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5t6xf" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.793572 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch"] Dec 12 16:22:30 crc kubenswrapper[4693]: E1212 16:22:30.794058 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb355093-a7b2-4ab1-a525-a05808e9bd81" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.794076 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb355093-a7b2-4ab1-a525-a05808e9bd81" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.794351 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb355093-a7b2-4ab1-a525-a05808e9bd81" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.795302 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.797751 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.797938 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.798165 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.800151 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.814938 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch"] Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.917761 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4427c16b-eff6-4617-849f-68df88868e0f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k8rch\" (UID: \"4427c16b-eff6-4617-849f-68df88868e0f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.917838 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqfbw\" (UniqueName: \"kubernetes.io/projected/4427c16b-eff6-4617-849f-68df88868e0f-kube-api-access-cqfbw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k8rch\" (UID: \"4427c16b-eff6-4617-849f-68df88868e0f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" Dec 12 16:22:30 crc kubenswrapper[4693]: I1212 16:22:30.918223 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4427c16b-eff6-4617-849f-68df88868e0f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k8rch\" (UID: \"4427c16b-eff6-4617-849f-68df88868e0f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.020071 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4427c16b-eff6-4617-849f-68df88868e0f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k8rch\" (UID: \"4427c16b-eff6-4617-849f-68df88868e0f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.020445 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqfbw\" (UniqueName: \"kubernetes.io/projected/4427c16b-eff6-4617-849f-68df88868e0f-kube-api-access-cqfbw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k8rch\" (UID: \"4427c16b-eff6-4617-849f-68df88868e0f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.020547 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4427c16b-eff6-4617-849f-68df88868e0f-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-k8rch\" (UID: \"4427c16b-eff6-4617-849f-68df88868e0f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.029431 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4427c16b-eff6-4617-849f-68df88868e0f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k8rch\" (UID: \"4427c16b-eff6-4617-849f-68df88868e0f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.029431 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4427c16b-eff6-4617-849f-68df88868e0f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k8rch\" (UID: \"4427c16b-eff6-4617-849f-68df88868e0f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.035344 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqfbw\" (UniqueName: \"kubernetes.io/projected/4427c16b-eff6-4617-849f-68df88868e0f-kube-api-access-cqfbw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k8rch\" (UID: \"4427c16b-eff6-4617-849f-68df88868e0f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.113313 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.195380 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.225043 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-catalog-content\") pod \"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c\" (UID: \"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c\") " Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.225126 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-utilities\") pod \"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c\" (UID: \"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c\") " Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.225565 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nf55\" (UniqueName: \"kubernetes.io/projected/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-kube-api-access-4nf55\") pod \"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c\" (UID: \"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c\") " Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.226257 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-utilities" (OuterVolumeSpecName: "utilities") pod "e621f6d7-d379-4e9f-8a18-96e8e81d6e3c" (UID: "e621f6d7-d379-4e9f-8a18-96e8e81d6e3c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.226611 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.231000 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-kube-api-access-4nf55" (OuterVolumeSpecName: "kube-api-access-4nf55") pod "e621f6d7-d379-4e9f-8a18-96e8e81d6e3c" (UID: "e621f6d7-d379-4e9f-8a18-96e8e81d6e3c"). InnerVolumeSpecName "kube-api-access-4nf55". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.282832 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e621f6d7-d379-4e9f-8a18-96e8e81d6e3c" (UID: "e621f6d7-d379-4e9f-8a18-96e8e81d6e3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.328789 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.329193 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nf55\" (UniqueName: \"kubernetes.io/projected/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c-kube-api-access-4nf55\") on node \"crc\" DevicePath \"\"" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.715137 4693 generic.go:334] "Generic (PLEG): container finished" podID="e621f6d7-d379-4e9f-8a18-96e8e81d6e3c" containerID="1fd80cb804fa82dfba8da192326b2ca98c441b70e593261328cfbe836c8b739d" exitCode=0 Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.715172 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgf6n" event={"ID":"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c","Type":"ContainerDied","Data":"1fd80cb804fa82dfba8da192326b2ca98c441b70e593261328cfbe836c8b739d"} Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.715199 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgf6n" event={"ID":"e621f6d7-d379-4e9f-8a18-96e8e81d6e3c","Type":"ContainerDied","Data":"b59419b1fb9cd8fbc772e7e0a72ba46095e469bd66452eaeb7fb9865fb73442c"} Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.715218 4693 scope.go:117] "RemoveContainer" containerID="1fd80cb804fa82dfba8da192326b2ca98c441b70e593261328cfbe836c8b739d" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.715367 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qgf6n" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.750825 4693 scope.go:117] "RemoveContainer" containerID="9fe54c52f2539e0f4cebe6422ef42e942f4e4b2b9a788eeae7625506c87c041b" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.761577 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgf6n"] Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.777068 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qgf6n"] Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.782611 4693 scope.go:117] "RemoveContainer" containerID="e4400a0590a62d98b0634a26859bbfa1993c96b0df13543913b7e39414f68fd7" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.821507 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch"] Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.937005 4693 scope.go:117] "RemoveContainer" containerID="1fd80cb804fa82dfba8da192326b2ca98c441b70e593261328cfbe836c8b739d" Dec 12 16:22:31 crc kubenswrapper[4693]: E1212 16:22:31.937548 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fd80cb804fa82dfba8da192326b2ca98c441b70e593261328cfbe836c8b739d\": container with ID starting with 1fd80cb804fa82dfba8da192326b2ca98c441b70e593261328cfbe836c8b739d not found: ID does not exist" containerID="1fd80cb804fa82dfba8da192326b2ca98c441b70e593261328cfbe836c8b739d" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.937601 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fd80cb804fa82dfba8da192326b2ca98c441b70e593261328cfbe836c8b739d"} err="failed to get container status \"1fd80cb804fa82dfba8da192326b2ca98c441b70e593261328cfbe836c8b739d\": rpc error: code = NotFound desc = could not find container \"1fd80cb804fa82dfba8da192326b2ca98c441b70e593261328cfbe836c8b739d\": container with ID starting with 1fd80cb804fa82dfba8da192326b2ca98c441b70e593261328cfbe836c8b739d not found: ID does not exist" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.937636 4693 scope.go:117] "RemoveContainer" containerID="9fe54c52f2539e0f4cebe6422ef42e942f4e4b2b9a788eeae7625506c87c041b" Dec 12 16:22:31 crc kubenswrapper[4693]: E1212 16:22:31.938422 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fe54c52f2539e0f4cebe6422ef42e942f4e4b2b9a788eeae7625506c87c041b\": container with ID starting with 9fe54c52f2539e0f4cebe6422ef42e942f4e4b2b9a788eeae7625506c87c041b not found: ID does not exist" containerID="9fe54c52f2539e0f4cebe6422ef42e942f4e4b2b9a788eeae7625506c87c041b" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.938449 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fe54c52f2539e0f4cebe6422ef42e942f4e4b2b9a788eeae7625506c87c041b"} err="failed to get container status \"9fe54c52f2539e0f4cebe6422ef42e942f4e4b2b9a788eeae7625506c87c041b\": rpc error: code = NotFound desc = could not find container \"9fe54c52f2539e0f4cebe6422ef42e942f4e4b2b9a788eeae7625506c87c041b\": container with ID starting with 9fe54c52f2539e0f4cebe6422ef42e942f4e4b2b9a788eeae7625506c87c041b not found: ID does not exist" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.938469 4693 scope.go:117] "RemoveContainer" 
containerID="e4400a0590a62d98b0634a26859bbfa1993c96b0df13543913b7e39414f68fd7" Dec 12 16:22:31 crc kubenswrapper[4693]: E1212 16:22:31.938904 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4400a0590a62d98b0634a26859bbfa1993c96b0df13543913b7e39414f68fd7\": container with ID starting with e4400a0590a62d98b0634a26859bbfa1993c96b0df13543913b7e39414f68fd7 not found: ID does not exist" containerID="e4400a0590a62d98b0634a26859bbfa1993c96b0df13543913b7e39414f68fd7" Dec 12 16:22:31 crc kubenswrapper[4693]: I1212 16:22:31.938926 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4400a0590a62d98b0634a26859bbfa1993c96b0df13543913b7e39414f68fd7"} err="failed to get container status \"e4400a0590a62d98b0634a26859bbfa1993c96b0df13543913b7e39414f68fd7\": rpc error: code = NotFound desc = could not find container \"e4400a0590a62d98b0634a26859bbfa1993c96b0df13543913b7e39414f68fd7\": container with ID starting with e4400a0590a62d98b0634a26859bbfa1993c96b0df13543913b7e39414f68fd7 not found: ID does not exist" Dec 12 16:22:32 crc kubenswrapper[4693]: I1212 16:22:32.728931 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" event={"ID":"4427c16b-eff6-4617-849f-68df88868e0f","Type":"ContainerStarted","Data":"9beab5db278afbf320c6098c6637178140387513db26425e08614a23b1ac2071"} Dec 12 16:22:32 crc kubenswrapper[4693]: I1212 16:22:32.729355 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" event={"ID":"4427c16b-eff6-4617-849f-68df88868e0f","Type":"ContainerStarted","Data":"d39049402b97561c9cff7d95350b3b37a7341603fbc5b7d94b664d2146fc2a88"} Dec 12 16:22:32 crc kubenswrapper[4693]: I1212 16:22:32.752604 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" podStartSLOduration=2.144489218 podStartE2EDuration="2.752578348s" podCreationTimestamp="2025-12-12 16:22:30 +0000 UTC" firstStartedPulling="2025-12-12 16:22:31.828356899 +0000 UTC m=+2178.996996500" lastFinishedPulling="2025-12-12 16:22:32.436446029 +0000 UTC m=+2179.605085630" observedRunningTime="2025-12-12 16:22:32.745140686 +0000 UTC m=+2179.913780287" watchObservedRunningTime="2025-12-12 16:22:32.752578348 +0000 UTC m=+2179.921217979" Dec 12 16:22:33 crc kubenswrapper[4693]: I1212 16:22:33.379116 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e621f6d7-d379-4e9f-8a18-96e8e81d6e3c" path="/var/lib/kubelet/pods/e621f6d7-d379-4e9f-8a18-96e8e81d6e3c/volumes" Dec 12 16:22:45 crc kubenswrapper[4693]: I1212 16:22:45.075016 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-55c8q"] Dec 12 16:22:45 crc kubenswrapper[4693]: I1212 16:22:45.097380 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-55c8q"] Dec 12 16:22:45 crc kubenswrapper[4693]: I1212 16:22:45.372116 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a29594ee-7274-4690-a79d-ef6bd3a8a2fd" path="/var/lib/kubelet/pods/a29594ee-7274-4690-a79d-ef6bd3a8a2fd/volumes" Dec 12 16:22:57 crc kubenswrapper[4693]: I1212 16:22:57.045178 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8d2xs"] Dec 12 16:22:57 crc kubenswrapper[4693]: I1212 16:22:57.060134 4693 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/placement-db-sync-26nbh"] Dec 12 16:22:57 crc kubenswrapper[4693]: I1212 16:22:57.075127 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8d2xs"] Dec 12 16:22:57 crc kubenswrapper[4693]: I1212 16:22:57.085207 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-26nbh"] Dec 12 16:22:57 crc kubenswrapper[4693]: I1212 16:22:57.372971 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9560987d-9cb8-4361-9c44-3be630e46634" path="/var/lib/kubelet/pods/9560987d-9cb8-4361-9c44-3be630e46634/volumes" Dec 12 16:22:57 crc kubenswrapper[4693]: I1212 16:22:57.374402 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a85781c9-4fb2-4c98-802d-d9fa60ff72e0" path="/var/lib/kubelet/pods/a85781c9-4fb2-4c98-802d-d9fa60ff72e0/volumes" Dec 12 16:22:58 crc kubenswrapper[4693]: I1212 16:22:58.219008 4693 scope.go:117] "RemoveContainer" containerID="6190cd6e7a5d693d3be9f8e486f66805061313fefcc5db27911d0f2cbdbd6bbd" Dec 12 16:22:58 crc kubenswrapper[4693]: I1212 16:22:58.248711 4693 scope.go:117] "RemoveContainer" containerID="f44acde98f21f3b2b2c50cdbc28f448522796792d9521a3168a0ae98ab025817" Dec 12 16:22:58 crc kubenswrapper[4693]: I1212 16:22:58.313183 4693 scope.go:117] "RemoveContainer" containerID="4fecf5693d85a30d15126bf103858e0db32eeb536d5a364d5811de86d997818f" Dec 12 16:22:58 crc kubenswrapper[4693]: I1212 16:22:58.376375 4693 scope.go:117] "RemoveContainer" containerID="318c91efc08fa6feccde5468ecdf5ab4d6c67c2147e6f7014d182371342bbd32" Dec 12 16:22:58 crc kubenswrapper[4693]: I1212 16:22:58.434577 4693 scope.go:117] "RemoveContainer" containerID="f22fb99ecf781d897c8de73974db44be5f6cb5c0848f6f592e99eff8c1281886" Dec 12 16:22:58 crc kubenswrapper[4693]: I1212 16:22:58.486436 4693 scope.go:117] "RemoveContainer" containerID="24ee73087c6851104146242608495e6d664845962234a77ad7bec9ce8b9dd35a" Dec 12 16:22:58 crc kubenswrapper[4693]: I1212 16:22:58.532712 4693 scope.go:117] "RemoveContainer" containerID="28fd08091e76687fb5188f41be0dea9c01a9ba08ff14b73e9d3358907e805303" Dec 12 16:22:58 crc kubenswrapper[4693]: I1212 16:22:58.561575 4693 scope.go:117] "RemoveContainer" containerID="5eef873a8017a549f187bc0ed2b233785fd9d86af8e0baa9a36edb40782e14c7" Dec 12 16:22:58 crc kubenswrapper[4693]: I1212 16:22:58.581842 4693 scope.go:117] "RemoveContainer" containerID="4e78a73f50845b9bfe7d7d087a29214f988f35e70bcab7a38fb2efa15f2a851d" Dec 12 16:22:58 crc kubenswrapper[4693]: I1212 16:22:58.612777 4693 scope.go:117] "RemoveContainer" containerID="22949cb7572fb584130285c71a4d0da6ee9debc53e44264501698c973035e19d" Dec 12 16:22:58 crc kubenswrapper[4693]: I1212 16:22:58.642241 4693 scope.go:117] "RemoveContainer" containerID="9386fbc3721c2552e9b2cc95c711d33dcf7c2e5f852515283d0397532b043493" Dec 12 16:22:58 crc kubenswrapper[4693]: I1212 16:22:58.667426 4693 scope.go:117] "RemoveContainer" containerID="dbd2364c35e67e76146c6f002481a8873206dc1d6bada39b3c602172035c8c65" Dec 12 16:22:58 crc kubenswrapper[4693]: I1212 16:22:58.690113 4693 scope.go:117] "RemoveContainer" containerID="17fa181b0cc0318095aad6969966cc6e3cae07471badac880e252e96913a8167" Dec 12 16:23:10 crc kubenswrapper[4693]: I1212 16:23:10.049213 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-pxq6t"] Dec 12 16:23:10 crc kubenswrapper[4693]: I1212 16:23:10.060647 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-pxq6t"] Dec 12 16:23:11 crc 
kubenswrapper[4693]: I1212 16:23:11.385383 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f6afc80-5a96-44ee-98c0-89a474913867" path="/var/lib/kubelet/pods/1f6afc80-5a96-44ee-98c0-89a474913867/volumes" Dec 12 16:23:19 crc kubenswrapper[4693]: I1212 16:23:19.033379 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-6fb94"] Dec 12 16:23:19 crc kubenswrapper[4693]: I1212 16:23:19.045850 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-6fb94"] Dec 12 16:23:19 crc kubenswrapper[4693]: I1212 16:23:19.373509 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ae7c15-9f4d-4ef8-83d7-279226e74846" path="/var/lib/kubelet/pods/42ae7c15-9f4d-4ef8-83d7-279226e74846/volumes" Dec 12 16:23:42 crc kubenswrapper[4693]: I1212 16:23:42.530926 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:23:42 crc kubenswrapper[4693]: I1212 16:23:42.531754 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:23:58 crc kubenswrapper[4693]: I1212 16:23:58.043413 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-v7mxb"] Dec 12 16:23:58 crc kubenswrapper[4693]: I1212 16:23:58.059250 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-v7mxb"] Dec 12 16:23:58 crc kubenswrapper[4693]: I1212 16:23:58.967584 4693 scope.go:117] "RemoveContainer" containerID="f9aa529b375e788b21d532f313443a601c90560e7b297c7e6c3a53ca6c35874c" Dec 12 16:23:59 crc kubenswrapper[4693]: I1212 16:23:59.018790 4693 scope.go:117] "RemoveContainer" containerID="145b98f988ff459d044a19e450de0cc2029fd1ca3616f4857f7cb6ec6284f26b" Dec 12 16:23:59 crc kubenswrapper[4693]: I1212 16:23:59.053866 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-1b78-account-create-update-c757l"] Dec 12 16:23:59 crc kubenswrapper[4693]: I1212 16:23:59.069032 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-b930-account-create-update-l42mn"] Dec 12 16:23:59 crc kubenswrapper[4693]: I1212 16:23:59.080526 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-jq8js"] Dec 12 16:23:59 crc kubenswrapper[4693]: I1212 16:23:59.093821 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-b930-account-create-update-l42mn"] Dec 12 16:23:59 crc kubenswrapper[4693]: I1212 16:23:59.111938 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-1b78-account-create-update-c757l"] Dec 12 16:23:59 crc kubenswrapper[4693]: I1212 16:23:59.130875 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-jq8js"] Dec 12 16:23:59 crc kubenswrapper[4693]: I1212 16:23:59.374507 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="187aaad3-d0f1-4f94-8ee3-05b2df3da7f4" path="/var/lib/kubelet/pods/187aaad3-d0f1-4f94-8ee3-05b2df3da7f4/volumes" Dec 12 16:23:59 crc kubenswrapper[4693]: I1212 
16:23:59.376321 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f3fc160-512e-4b69-8997-696c7b80c676" path="/var/lib/kubelet/pods/1f3fc160-512e-4b69-8997-696c7b80c676/volumes" Dec 12 16:23:59 crc kubenswrapper[4693]: I1212 16:23:59.377375 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e828b00-d636-4aee-8eae-5bb405a342d7" path="/var/lib/kubelet/pods/8e828b00-d636-4aee-8eae-5bb405a342d7/volumes" Dec 12 16:23:59 crc kubenswrapper[4693]: I1212 16:23:59.378951 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc0d7ecc-b112-46ba-aceb-681e37cb50a3" path="/var/lib/kubelet/pods/dc0d7ecc-b112-46ba-aceb-681e37cb50a3/volumes" Dec 12 16:24:00 crc kubenswrapper[4693]: I1212 16:24:00.030253 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-7e7f-account-create-update-f9glz"] Dec 12 16:24:00 crc kubenswrapper[4693]: I1212 16:24:00.058147 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-5lhks"] Dec 12 16:24:00 crc kubenswrapper[4693]: I1212 16:24:00.074145 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-7e7f-account-create-update-f9glz"] Dec 12 16:24:00 crc kubenswrapper[4693]: I1212 16:24:00.088887 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-5lhks"] Dec 12 16:24:01 crc kubenswrapper[4693]: I1212 16:24:01.371893 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b08fa7b7-2bf5-4695-be3c-4a455172a896" path="/var/lib/kubelet/pods/b08fa7b7-2bf5-4695-be3c-4a455172a896/volumes" Dec 12 16:24:01 crc kubenswrapper[4693]: I1212 16:24:01.373191 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9881347-9d47-4c92-93ec-aeee80ff784d" path="/var/lib/kubelet/pods/c9881347-9d47-4c92-93ec-aeee80ff784d/volumes" Dec 12 16:24:12 crc kubenswrapper[4693]: I1212 16:24:12.530527 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:24:12 crc kubenswrapper[4693]: I1212 16:24:12.531159 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:24:38 crc kubenswrapper[4693]: I1212 16:24:38.094421 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dgtln"] Dec 12 16:24:38 crc kubenswrapper[4693]: I1212 16:24:38.110003 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dgtln"] Dec 12 16:24:39 crc kubenswrapper[4693]: I1212 16:24:39.375738 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55ff1574-a937-497e-9e7a-0589ee9732bf" path="/var/lib/kubelet/pods/55ff1574-a937-497e-9e7a-0589ee9732bf/volumes" Dec 12 16:24:42 crc kubenswrapper[4693]: I1212 16:24:42.530201 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Dec 12 16:24:42 crc kubenswrapper[4693]: I1212 16:24:42.530682 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:24:42 crc kubenswrapper[4693]: I1212 16:24:42.530735 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 16:24:42 crc kubenswrapper[4693]: I1212 16:24:42.531725 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:24:42 crc kubenswrapper[4693]: I1212 16:24:42.531778 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" gracePeriod=600 Dec 12 16:24:42 crc kubenswrapper[4693]: E1212 16:24:42.662350 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:24:43 crc kubenswrapper[4693]: I1212 16:24:43.349879 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" exitCode=0 Dec 12 16:24:43 crc kubenswrapper[4693]: I1212 16:24:43.349933 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64"} Dec 12 16:24:43 crc kubenswrapper[4693]: I1212 16:24:43.349971 4693 scope.go:117] "RemoveContainer" containerID="6c3c61a76193f5b8920ed6ca3953db8f9d6878fcc15435a84ab980dcf1f2a982" Dec 12 16:24:43 crc kubenswrapper[4693]: I1212 16:24:43.350968 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:24:43 crc kubenswrapper[4693]: E1212 16:24:43.351474 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:24:55 crc kubenswrapper[4693]: I1212 16:24:55.357317 4693 scope.go:117] "RemoveContainer" 
containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:24:55 crc kubenswrapper[4693]: E1212 16:24:55.359772 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:24:58 crc kubenswrapper[4693]: I1212 16:24:58.037528 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-b95d-account-create-update-km26x"] Dec 12 16:24:58 crc kubenswrapper[4693]: I1212 16:24:58.052009 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-b95d-account-create-update-km26x"] Dec 12 16:24:58 crc kubenswrapper[4693]: I1212 16:24:58.541708 4693 generic.go:334] "Generic (PLEG): container finished" podID="4427c16b-eff6-4617-849f-68df88868e0f" containerID="9beab5db278afbf320c6098c6637178140387513db26425e08614a23b1ac2071" exitCode=0 Dec 12 16:24:58 crc kubenswrapper[4693]: I1212 16:24:58.541751 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" event={"ID":"4427c16b-eff6-4617-849f-68df88868e0f","Type":"ContainerDied","Data":"9beab5db278afbf320c6098c6637178140387513db26425e08614a23b1ac2071"} Dec 12 16:24:59 crc kubenswrapper[4693]: I1212 16:24:59.037509 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-4447h"] Dec 12 16:24:59 crc kubenswrapper[4693]: I1212 16:24:59.055750 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-4447h"] Dec 12 16:24:59 crc kubenswrapper[4693]: I1212 16:24:59.113317 4693 scope.go:117] "RemoveContainer" containerID="9583aead689f48f7e4ba54ff58c339653bc811a85cedf5ad72c9571f63d86846" Dec 12 16:24:59 crc kubenswrapper[4693]: I1212 16:24:59.148336 4693 scope.go:117] "RemoveContainer" containerID="8bec8dcd017a048656c8f84dcb72cf3eff18e1d7a1fcb216aad90dcf3d1ce5fd" Dec 12 16:24:59 crc kubenswrapper[4693]: I1212 16:24:59.236693 4693 scope.go:117] "RemoveContainer" containerID="7df58e7abad652dfa43b77478227c0fcad50dab989ea9ba8265662987455f1db" Dec 12 16:24:59 crc kubenswrapper[4693]: I1212 16:24:59.287185 4693 scope.go:117] "RemoveContainer" containerID="5d18b0825ee2619dfc29b537dedc6dcd086a1c509f0e5fe57de67f4502904f9d" Dec 12 16:24:59 crc kubenswrapper[4693]: I1212 16:24:59.335618 4693 scope.go:117] "RemoveContainer" containerID="2f895418ec16da8034a1cb9c42a0c3824a1c993b70c6b7cef24a018c83809510" Dec 12 16:24:59 crc kubenswrapper[4693]: I1212 16:24:59.378166 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e9c57e3-dcab-45d3-9013-a7348e6d94ec" path="/var/lib/kubelet/pods/5e9c57e3-dcab-45d3-9013-a7348e6d94ec/volumes" Dec 12 16:24:59 crc kubenswrapper[4693]: I1212 16:24:59.386485 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6250357e-de93-4024-acda-b1ebbf788eca" path="/var/lib/kubelet/pods/6250357e-de93-4024-acda-b1ebbf788eca/volumes" Dec 12 16:24:59 crc kubenswrapper[4693]: I1212 16:24:59.411875 4693 scope.go:117] "RemoveContainer" containerID="ea68d47a0aed1c22031e57d1d7613edfba0f043dad64ba4505a75d644f0df2cb" Dec 12 16:24:59 crc kubenswrapper[4693]: I1212 16:24:59.465750 4693 scope.go:117] "RemoveContainer" 
containerID="696e8a8d6374118f885fda0c957bb85089f29c644bb9649b18ab2738fee3b640" Dec 12 16:24:59 crc kubenswrapper[4693]: E1212 16:24:59.864184 4693 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.002955 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.130018 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqfbw\" (UniqueName: \"kubernetes.io/projected/4427c16b-eff6-4617-849f-68df88868e0f-kube-api-access-cqfbw\") pod \"4427c16b-eff6-4617-849f-68df88868e0f\" (UID: \"4427c16b-eff6-4617-849f-68df88868e0f\") " Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.130068 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4427c16b-eff6-4617-849f-68df88868e0f-ssh-key\") pod \"4427c16b-eff6-4617-849f-68df88868e0f\" (UID: \"4427c16b-eff6-4617-849f-68df88868e0f\") " Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.130245 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4427c16b-eff6-4617-849f-68df88868e0f-inventory\") pod \"4427c16b-eff6-4617-849f-68df88868e0f\" (UID: \"4427c16b-eff6-4617-849f-68df88868e0f\") " Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.137722 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4427c16b-eff6-4617-849f-68df88868e0f-kube-api-access-cqfbw" (OuterVolumeSpecName: "kube-api-access-cqfbw") pod "4427c16b-eff6-4617-849f-68df88868e0f" (UID: "4427c16b-eff6-4617-849f-68df88868e0f"). InnerVolumeSpecName "kube-api-access-cqfbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.161878 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4427c16b-eff6-4617-849f-68df88868e0f-inventory" (OuterVolumeSpecName: "inventory") pod "4427c16b-eff6-4617-849f-68df88868e0f" (UID: "4427c16b-eff6-4617-849f-68df88868e0f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.164050 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4427c16b-eff6-4617-849f-68df88868e0f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4427c16b-eff6-4617-849f-68df88868e0f" (UID: "4427c16b-eff6-4617-849f-68df88868e0f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.236935 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqfbw\" (UniqueName: \"kubernetes.io/projected/4427c16b-eff6-4617-849f-68df88868e0f-kube-api-access-cqfbw\") on node \"crc\" DevicePath \"\"" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.236973 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4427c16b-eff6-4617-849f-68df88868e0f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.236984 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4427c16b-eff6-4617-849f-68df88868e0f-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.594181 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" event={"ID":"4427c16b-eff6-4617-849f-68df88868e0f","Type":"ContainerDied","Data":"d39049402b97561c9cff7d95350b3b37a7341603fbc5b7d94b664d2146fc2a88"} Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.594598 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d39049402b97561c9cff7d95350b3b37a7341603fbc5b7d94b664d2146fc2a88" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.594390 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8rch" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.707655 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb"] Dec 12 16:25:00 crc kubenswrapper[4693]: E1212 16:25:00.708479 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4427c16b-eff6-4617-849f-68df88868e0f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.708510 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="4427c16b-eff6-4617-849f-68df88868e0f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 12 16:25:00 crc kubenswrapper[4693]: E1212 16:25:00.708524 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e621f6d7-d379-4e9f-8a18-96e8e81d6e3c" containerName="extract-content" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.708532 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="e621f6d7-d379-4e9f-8a18-96e8e81d6e3c" containerName="extract-content" Dec 12 16:25:00 crc kubenswrapper[4693]: E1212 16:25:00.708564 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e621f6d7-d379-4e9f-8a18-96e8e81d6e3c" containerName="registry-server" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.708572 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="e621f6d7-d379-4e9f-8a18-96e8e81d6e3c" containerName="registry-server" Dec 12 16:25:00 crc kubenswrapper[4693]: E1212 16:25:00.708613 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e621f6d7-d379-4e9f-8a18-96e8e81d6e3c" containerName="extract-utilities" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.708622 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="e621f6d7-d379-4e9f-8a18-96e8e81d6e3c" containerName="extract-utilities" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.708930 4693 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4427c16b-eff6-4617-849f-68df88868e0f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.708970 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="e621f6d7-d379-4e9f-8a18-96e8e81d6e3c" containerName="registry-server" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.710160 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.712583 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.712952 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.713169 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.717514 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.727141 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb"] Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.856525 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx8tj\" (UniqueName: \"kubernetes.io/projected/83c53cdf-8b08-4976-a4ff-3b8e1e921731-kube-api-access-qx8tj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4svb\" (UID: \"83c53cdf-8b08-4976-a4ff-3b8e1e921731\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.856717 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83c53cdf-8b08-4976-a4ff-3b8e1e921731-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4svb\" (UID: \"83c53cdf-8b08-4976-a4ff-3b8e1e921731\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.856844 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/83c53cdf-8b08-4976-a4ff-3b8e1e921731-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4svb\" (UID: \"83c53cdf-8b08-4976-a4ff-3b8e1e921731\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.958352 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83c53cdf-8b08-4976-a4ff-3b8e1e921731-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4svb\" (UID: \"83c53cdf-8b08-4976-a4ff-3b8e1e921731\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.958458 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/83c53cdf-8b08-4976-a4ff-3b8e1e921731-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4svb\" (UID: \"83c53cdf-8b08-4976-a4ff-3b8e1e921731\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.958540 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx8tj\" (UniqueName: \"kubernetes.io/projected/83c53cdf-8b08-4976-a4ff-3b8e1e921731-kube-api-access-qx8tj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4svb\" (UID: \"83c53cdf-8b08-4976-a4ff-3b8e1e921731\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.964663 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/83c53cdf-8b08-4976-a4ff-3b8e1e921731-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4svb\" (UID: \"83c53cdf-8b08-4976-a4ff-3b8e1e921731\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.964688 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83c53cdf-8b08-4976-a4ff-3b8e1e921731-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4svb\" (UID: \"83c53cdf-8b08-4976-a4ff-3b8e1e921731\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" Dec 12 16:25:00 crc kubenswrapper[4693]: I1212 16:25:00.977733 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx8tj\" (UniqueName: \"kubernetes.io/projected/83c53cdf-8b08-4976-a4ff-3b8e1e921731-kube-api-access-qx8tj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4svb\" (UID: \"83c53cdf-8b08-4976-a4ff-3b8e1e921731\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" Dec 12 16:25:01 crc kubenswrapper[4693]: I1212 16:25:01.028432 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" Dec 12 16:25:01 crc kubenswrapper[4693]: I1212 16:25:01.627438 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb"] Dec 12 16:25:01 crc kubenswrapper[4693]: W1212 16:25:01.631669 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83c53cdf_8b08_4976_a4ff_3b8e1e921731.slice/crio-d0c6ab67fa5182ca149013ea0239d1705fdc7c2ae20cfa8549fbcd3b5038ff60 WatchSource:0}: Error finding container d0c6ab67fa5182ca149013ea0239d1705fdc7c2ae20cfa8549fbcd3b5038ff60: Status 404 returned error can't find the container with id d0c6ab67fa5182ca149013ea0239d1705fdc7c2ae20cfa8549fbcd3b5038ff60 Dec 12 16:25:02 crc kubenswrapper[4693]: I1212 16:25:02.637555 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" event={"ID":"83c53cdf-8b08-4976-a4ff-3b8e1e921731","Type":"ContainerStarted","Data":"a9d2fd14f9baed7588eafb4c23d10e04246e00b2582a1d0f837f57da81a219df"} Dec 12 16:25:02 crc kubenswrapper[4693]: I1212 16:25:02.638172 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" event={"ID":"83c53cdf-8b08-4976-a4ff-3b8e1e921731","Type":"ContainerStarted","Data":"d0c6ab67fa5182ca149013ea0239d1705fdc7c2ae20cfa8549fbcd3b5038ff60"} Dec 12 16:25:02 crc kubenswrapper[4693]: I1212 16:25:02.664023 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" podStartSLOduration=1.97132613 podStartE2EDuration="2.664000565s" podCreationTimestamp="2025-12-12 16:25:00 +0000 UTC" firstStartedPulling="2025-12-12 16:25:01.634085344 +0000 UTC m=+2328.802724955" lastFinishedPulling="2025-12-12 16:25:02.326759789 +0000 UTC m=+2329.495399390" observedRunningTime="2025-12-12 16:25:02.653552482 +0000 UTC m=+2329.822192103" watchObservedRunningTime="2025-12-12 16:25:02.664000565 +0000 UTC m=+2329.832640166" Dec 12 16:25:10 crc kubenswrapper[4693]: I1212 16:25:10.058173 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mp8v7"] Dec 12 16:25:10 crc kubenswrapper[4693]: I1212 16:25:10.071911 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-jm5sn"] Dec 12 16:25:10 crc kubenswrapper[4693]: I1212 16:25:10.081971 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mp8v7"] Dec 12 16:25:10 crc kubenswrapper[4693]: I1212 16:25:10.095222 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-jm5sn"] Dec 12 16:25:10 crc kubenswrapper[4693]: I1212 16:25:10.358279 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:25:10 crc kubenswrapper[4693]: E1212 16:25:10.358780 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:25:11 crc 
kubenswrapper[4693]: I1212 16:25:11.374494 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f211a4d-5e81-46e3-a31f-8f359ba8da6f" path="/var/lib/kubelet/pods/0f211a4d-5e81-46e3-a31f-8f359ba8da6f/volumes" Dec 12 16:25:11 crc kubenswrapper[4693]: I1212 16:25:11.376611 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47fe2ff8-dc3c-4e98-8502-da788efb57aa" path="/var/lib/kubelet/pods/47fe2ff8-dc3c-4e98-8502-da788efb57aa/volumes" Dec 12 16:25:24 crc kubenswrapper[4693]: I1212 16:25:24.357980 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:25:24 crc kubenswrapper[4693]: E1212 16:25:24.359523 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:25:38 crc kubenswrapper[4693]: I1212 16:25:38.360741 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:25:38 crc kubenswrapper[4693]: E1212 16:25:38.361676 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:25:50 crc kubenswrapper[4693]: I1212 16:25:50.360016 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:25:50 crc kubenswrapper[4693]: E1212 16:25:50.362402 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:25:51 crc kubenswrapper[4693]: I1212 16:25:51.044030 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-nrzkw"] Dec 12 16:25:51 crc kubenswrapper[4693]: I1212 16:25:51.055245 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-nrzkw"] Dec 12 16:25:51 crc kubenswrapper[4693]: I1212 16:25:51.370042 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21e1cc20-3fe3-4674-97a4-0489caa4588e" path="/var/lib/kubelet/pods/21e1cc20-3fe3-4674-97a4-0489caa4588e/volumes" Dec 12 16:25:59 crc kubenswrapper[4693]: I1212 16:25:59.663992 4693 scope.go:117] "RemoveContainer" containerID="9e0ef1f546e0f10dd65c70ed45c63defd4a8d3458e620a2ffdb68ada4e1b1135" Dec 12 16:25:59 crc kubenswrapper[4693]: I1212 16:25:59.717068 4693 scope.go:117] "RemoveContainer" containerID="6bca49c660e88075753b269ec3c652bb4f3ec69effaa3def39b1bd26971e6ae8" Dec 12 16:25:59 crc kubenswrapper[4693]: I1212 16:25:59.803539 4693 scope.go:117] "RemoveContainer" 
containerID="cfb2ec80fd83004328cdf174da11f06eb57d3839a8b3d4b0563de3d5f4204387" Dec 12 16:25:59 crc kubenswrapper[4693]: I1212 16:25:59.869589 4693 scope.go:117] "RemoveContainer" containerID="3881cbaeb54bb969750ad936ba7a54cc0d72415c4fa3823a04ea5c3eb32f205d" Dec 12 16:25:59 crc kubenswrapper[4693]: I1212 16:25:59.934525 4693 scope.go:117] "RemoveContainer" containerID="decf2d28d710671f8eecd2570b5ca3183b09dc12e6b450d48ed874bd7344afc9" Dec 12 16:26:05 crc kubenswrapper[4693]: I1212 16:26:05.357048 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:26:05 crc kubenswrapper[4693]: E1212 16:26:05.357861 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:26:19 crc kubenswrapper[4693]: I1212 16:26:19.358244 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:26:19 crc kubenswrapper[4693]: E1212 16:26:19.359811 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:26:25 crc kubenswrapper[4693]: I1212 16:26:25.735142 4693 generic.go:334] "Generic (PLEG): container finished" podID="83c53cdf-8b08-4976-a4ff-3b8e1e921731" containerID="a9d2fd14f9baed7588eafb4c23d10e04246e00b2582a1d0f837f57da81a219df" exitCode=0 Dec 12 16:26:25 crc kubenswrapper[4693]: I1212 16:26:25.735260 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" event={"ID":"83c53cdf-8b08-4976-a4ff-3b8e1e921731","Type":"ContainerDied","Data":"a9d2fd14f9baed7588eafb4c23d10e04246e00b2582a1d0f837f57da81a219df"} Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.275064 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.437549 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx8tj\" (UniqueName: \"kubernetes.io/projected/83c53cdf-8b08-4976-a4ff-3b8e1e921731-kube-api-access-qx8tj\") pod \"83c53cdf-8b08-4976-a4ff-3b8e1e921731\" (UID: \"83c53cdf-8b08-4976-a4ff-3b8e1e921731\") " Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.437952 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83c53cdf-8b08-4976-a4ff-3b8e1e921731-inventory\") pod \"83c53cdf-8b08-4976-a4ff-3b8e1e921731\" (UID: \"83c53cdf-8b08-4976-a4ff-3b8e1e921731\") " Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.438151 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/83c53cdf-8b08-4976-a4ff-3b8e1e921731-ssh-key\") pod \"83c53cdf-8b08-4976-a4ff-3b8e1e921731\" (UID: \"83c53cdf-8b08-4976-a4ff-3b8e1e921731\") " Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.443220 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c53cdf-8b08-4976-a4ff-3b8e1e921731-kube-api-access-qx8tj" (OuterVolumeSpecName: "kube-api-access-qx8tj") pod "83c53cdf-8b08-4976-a4ff-3b8e1e921731" (UID: "83c53cdf-8b08-4976-a4ff-3b8e1e921731"). InnerVolumeSpecName "kube-api-access-qx8tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.477917 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c53cdf-8b08-4976-a4ff-3b8e1e921731-inventory" (OuterVolumeSpecName: "inventory") pod "83c53cdf-8b08-4976-a4ff-3b8e1e921731" (UID: "83c53cdf-8b08-4976-a4ff-3b8e1e921731"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.478578 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c53cdf-8b08-4976-a4ff-3b8e1e921731-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "83c53cdf-8b08-4976-a4ff-3b8e1e921731" (UID: "83c53cdf-8b08-4976-a4ff-3b8e1e921731"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.540686 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/83c53cdf-8b08-4976-a4ff-3b8e1e921731-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.540718 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx8tj\" (UniqueName: \"kubernetes.io/projected/83c53cdf-8b08-4976-a4ff-3b8e1e921731-kube-api-access-qx8tj\") on node \"crc\" DevicePath \"\"" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.540731 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83c53cdf-8b08-4976-a4ff-3b8e1e921731-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.789765 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" event={"ID":"83c53cdf-8b08-4976-a4ff-3b8e1e921731","Type":"ContainerDied","Data":"d0c6ab67fa5182ca149013ea0239d1705fdc7c2ae20cfa8549fbcd3b5038ff60"} Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.789829 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0c6ab67fa5182ca149013ea0239d1705fdc7c2ae20cfa8549fbcd3b5038ff60" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.789957 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4svb" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.886433 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b"] Dec 12 16:26:27 crc kubenswrapper[4693]: E1212 16:26:27.887061 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c53cdf-8b08-4976-a4ff-3b8e1e921731" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.887082 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c53cdf-8b08-4976-a4ff-3b8e1e921731" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.887421 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c53cdf-8b08-4976-a4ff-3b8e1e921731" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.888407 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.894392 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.894704 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.894855 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.896956 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:26:27 crc kubenswrapper[4693]: I1212 16:26:27.899216 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b"] Dec 12 16:26:28 crc kubenswrapper[4693]: I1212 16:26:28.056432 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bb223e8c-b291-48c5-a382-f431348243ee-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b\" (UID: \"bb223e8c-b291-48c5-a382-f431348243ee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" Dec 12 16:26:28 crc kubenswrapper[4693]: I1212 16:26:28.056932 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kht4q\" (UniqueName: \"kubernetes.io/projected/bb223e8c-b291-48c5-a382-f431348243ee-kube-api-access-kht4q\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b\" (UID: \"bb223e8c-b291-48c5-a382-f431348243ee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" Dec 12 16:26:28 crc kubenswrapper[4693]: I1212 16:26:28.057114 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb223e8c-b291-48c5-a382-f431348243ee-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b\" (UID: \"bb223e8c-b291-48c5-a382-f431348243ee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" Dec 12 16:26:28 crc kubenswrapper[4693]: I1212 16:26:28.159673 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kht4q\" (UniqueName: \"kubernetes.io/projected/bb223e8c-b291-48c5-a382-f431348243ee-kube-api-access-kht4q\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b\" (UID: \"bb223e8c-b291-48c5-a382-f431348243ee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" Dec 12 16:26:28 crc kubenswrapper[4693]: I1212 16:26:28.161579 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb223e8c-b291-48c5-a382-f431348243ee-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b\" (UID: \"bb223e8c-b291-48c5-a382-f431348243ee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" Dec 12 16:26:28 crc kubenswrapper[4693]: I1212 16:26:28.162603 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bb223e8c-b291-48c5-a382-f431348243ee-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b\" (UID: \"bb223e8c-b291-48c5-a382-f431348243ee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" Dec 12 16:26:28 crc kubenswrapper[4693]: I1212 16:26:28.166444 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bb223e8c-b291-48c5-a382-f431348243ee-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b\" (UID: \"bb223e8c-b291-48c5-a382-f431348243ee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" Dec 12 16:26:28 crc kubenswrapper[4693]: I1212 16:26:28.166639 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb223e8c-b291-48c5-a382-f431348243ee-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b\" (UID: \"bb223e8c-b291-48c5-a382-f431348243ee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" Dec 12 16:26:28 crc kubenswrapper[4693]: I1212 16:26:28.177394 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kht4q\" (UniqueName: \"kubernetes.io/projected/bb223e8c-b291-48c5-a382-f431348243ee-kube-api-access-kht4q\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b\" (UID: \"bb223e8c-b291-48c5-a382-f431348243ee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" Dec 12 16:26:28 crc kubenswrapper[4693]: I1212 16:26:28.215295 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" Dec 12 16:26:28 crc kubenswrapper[4693]: I1212 16:26:28.760789 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b"] Dec 12 16:26:28 crc kubenswrapper[4693]: I1212 16:26:28.763636 4693 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 16:26:28 crc kubenswrapper[4693]: I1212 16:26:28.801185 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" event={"ID":"bb223e8c-b291-48c5-a382-f431348243ee","Type":"ContainerStarted","Data":"9d7865ae7be260e17717f2b347fb3622c46a25327b23dc73929edbf8e14b00a6"} Dec 12 16:26:29 crc kubenswrapper[4693]: I1212 16:26:29.814987 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" event={"ID":"bb223e8c-b291-48c5-a382-f431348243ee","Type":"ContainerStarted","Data":"8f975efe1762e889bd04decca5dc756f8e93ce27b1d2f96d1b1636806e913dcd"} Dec 12 16:26:29 crc kubenswrapper[4693]: I1212 16:26:29.834942 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" podStartSLOduration=2.244331653 podStartE2EDuration="2.834906058s" podCreationTimestamp="2025-12-12 16:26:27 +0000 UTC" firstStartedPulling="2025-12-12 16:26:28.763415842 +0000 UTC m=+2415.932055443" lastFinishedPulling="2025-12-12 16:26:29.353990237 +0000 UTC m=+2416.522629848" observedRunningTime="2025-12-12 16:26:29.82872063 +0000 UTC m=+2416.997360241" watchObservedRunningTime="2025-12-12 16:26:29.834906058 +0000 UTC m=+2417.003545649" Dec 12 16:26:30 crc kubenswrapper[4693]: I1212 16:26:30.357327 4693 scope.go:117] "RemoveContainer" 
containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:26:30 crc kubenswrapper[4693]: E1212 16:26:30.357641 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:26:34 crc kubenswrapper[4693]: I1212 16:26:34.885665 4693 generic.go:334] "Generic (PLEG): container finished" podID="bb223e8c-b291-48c5-a382-f431348243ee" containerID="8f975efe1762e889bd04decca5dc756f8e93ce27b1d2f96d1b1636806e913dcd" exitCode=0 Dec 12 16:26:34 crc kubenswrapper[4693]: I1212 16:26:34.885803 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" event={"ID":"bb223e8c-b291-48c5-a382-f431348243ee","Type":"ContainerDied","Data":"8f975efe1762e889bd04decca5dc756f8e93ce27b1d2f96d1b1636806e913dcd"} Dec 12 16:26:36 crc kubenswrapper[4693]: I1212 16:26:36.476866 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" Dec 12 16:26:36 crc kubenswrapper[4693]: I1212 16:26:36.481455 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bb223e8c-b291-48c5-a382-f431348243ee-ssh-key\") pod \"bb223e8c-b291-48c5-a382-f431348243ee\" (UID: \"bb223e8c-b291-48c5-a382-f431348243ee\") " Dec 12 16:26:36 crc kubenswrapper[4693]: I1212 16:26:36.481541 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb223e8c-b291-48c5-a382-f431348243ee-inventory\") pod \"bb223e8c-b291-48c5-a382-f431348243ee\" (UID: \"bb223e8c-b291-48c5-a382-f431348243ee\") " Dec 12 16:26:36 crc kubenswrapper[4693]: I1212 16:26:36.482069 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kht4q\" (UniqueName: \"kubernetes.io/projected/bb223e8c-b291-48c5-a382-f431348243ee-kube-api-access-kht4q\") pod \"bb223e8c-b291-48c5-a382-f431348243ee\" (UID: \"bb223e8c-b291-48c5-a382-f431348243ee\") " Dec 12 16:26:36 crc kubenswrapper[4693]: I1212 16:26:36.486971 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb223e8c-b291-48c5-a382-f431348243ee-kube-api-access-kht4q" (OuterVolumeSpecName: "kube-api-access-kht4q") pod "bb223e8c-b291-48c5-a382-f431348243ee" (UID: "bb223e8c-b291-48c5-a382-f431348243ee"). InnerVolumeSpecName "kube-api-access-kht4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:26:36 crc kubenswrapper[4693]: I1212 16:26:36.518380 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb223e8c-b291-48c5-a382-f431348243ee-inventory" (OuterVolumeSpecName: "inventory") pod "bb223e8c-b291-48c5-a382-f431348243ee" (UID: "bb223e8c-b291-48c5-a382-f431348243ee"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:26:36 crc kubenswrapper[4693]: I1212 16:26:36.530422 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb223e8c-b291-48c5-a382-f431348243ee-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bb223e8c-b291-48c5-a382-f431348243ee" (UID: "bb223e8c-b291-48c5-a382-f431348243ee"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:26:36 crc kubenswrapper[4693]: I1212 16:26:36.585625 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kht4q\" (UniqueName: \"kubernetes.io/projected/bb223e8c-b291-48c5-a382-f431348243ee-kube-api-access-kht4q\") on node \"crc\" DevicePath \"\"" Dec 12 16:26:36 crc kubenswrapper[4693]: I1212 16:26:36.585667 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bb223e8c-b291-48c5-a382-f431348243ee-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:26:36 crc kubenswrapper[4693]: I1212 16:26:36.585676 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb223e8c-b291-48c5-a382-f431348243ee-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:26:36 crc kubenswrapper[4693]: I1212 16:26:36.916684 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" event={"ID":"bb223e8c-b291-48c5-a382-f431348243ee","Type":"ContainerDied","Data":"9d7865ae7be260e17717f2b347fb3622c46a25327b23dc73929edbf8e14b00a6"} Dec 12 16:26:36 crc kubenswrapper[4693]: I1212 16:26:36.916744 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d7865ae7be260e17717f2b347fb3622c46a25327b23dc73929edbf8e14b00a6" Dec 12 16:26:36 crc kubenswrapper[4693]: I1212 16:26:36.917265 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7sf6b" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.031357 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv"] Dec 12 16:26:37 crc kubenswrapper[4693]: E1212 16:26:37.031831 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb223e8c-b291-48c5-a382-f431348243ee" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.031849 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb223e8c-b291-48c5-a382-f431348243ee" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.032127 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb223e8c-b291-48c5-a382-f431348243ee" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.033124 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.036068 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.046678 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.046909 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.047175 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.050901 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv"] Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.100734 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1ff98c9-e61b-4ebb-ac63-492868f4da29-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s4tkv\" (UID: \"a1ff98c9-e61b-4ebb-ac63-492868f4da29\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.101718 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2b26\" (UniqueName: \"kubernetes.io/projected/a1ff98c9-e61b-4ebb-ac63-492868f4da29-kube-api-access-f2b26\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s4tkv\" (UID: \"a1ff98c9-e61b-4ebb-ac63-492868f4da29\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.101917 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1ff98c9-e61b-4ebb-ac63-492868f4da29-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s4tkv\" (UID: \"a1ff98c9-e61b-4ebb-ac63-492868f4da29\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.204447 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1ff98c9-e61b-4ebb-ac63-492868f4da29-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s4tkv\" (UID: \"a1ff98c9-e61b-4ebb-ac63-492868f4da29\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.204515 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2b26\" (UniqueName: \"kubernetes.io/projected/a1ff98c9-e61b-4ebb-ac63-492868f4da29-kube-api-access-f2b26\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s4tkv\" (UID: \"a1ff98c9-e61b-4ebb-ac63-492868f4da29\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.204599 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1ff98c9-e61b-4ebb-ac63-492868f4da29-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s4tkv\" (UID: 
\"a1ff98c9-e61b-4ebb-ac63-492868f4da29\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.210261 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1ff98c9-e61b-4ebb-ac63-492868f4da29-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s4tkv\" (UID: \"a1ff98c9-e61b-4ebb-ac63-492868f4da29\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.217340 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1ff98c9-e61b-4ebb-ac63-492868f4da29-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s4tkv\" (UID: \"a1ff98c9-e61b-4ebb-ac63-492868f4da29\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.223493 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2b26\" (UniqueName: \"kubernetes.io/projected/a1ff98c9-e61b-4ebb-ac63-492868f4da29-kube-api-access-f2b26\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s4tkv\" (UID: \"a1ff98c9-e61b-4ebb-ac63-492868f4da29\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.363386 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" Dec 12 16:26:37 crc kubenswrapper[4693]: I1212 16:26:37.992761 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv"] Dec 12 16:26:37 crc kubenswrapper[4693]: W1212 16:26:37.994995 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1ff98c9_e61b_4ebb_ac63_492868f4da29.slice/crio-d8279b842bb3ea072552c220d9b478b64824284806600cc0e2c4f2f7c58e0917 WatchSource:0}: Error finding container d8279b842bb3ea072552c220d9b478b64824284806600cc0e2c4f2f7c58e0917: Status 404 returned error can't find the container with id d8279b842bb3ea072552c220d9b478b64824284806600cc0e2c4f2f7c58e0917 Dec 12 16:26:38 crc kubenswrapper[4693]: I1212 16:26:38.956254 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" event={"ID":"a1ff98c9-e61b-4ebb-ac63-492868f4da29","Type":"ContainerStarted","Data":"c40fae5b63f2f3f1f41d57484a6f5648490250b199b8711f103c763930896d5e"} Dec 12 16:26:38 crc kubenswrapper[4693]: I1212 16:26:38.956816 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" event={"ID":"a1ff98c9-e61b-4ebb-ac63-492868f4da29","Type":"ContainerStarted","Data":"d8279b842bb3ea072552c220d9b478b64824284806600cc0e2c4f2f7c58e0917"} Dec 12 16:26:38 crc kubenswrapper[4693]: I1212 16:26:38.985759 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" podStartSLOduration=1.495636538 podStartE2EDuration="1.985737557s" podCreationTimestamp="2025-12-12 16:26:37 +0000 UTC" firstStartedPulling="2025-12-12 16:26:37.998763886 +0000 UTC m=+2425.167403497" lastFinishedPulling="2025-12-12 16:26:38.488864915 +0000 UTC m=+2425.657504516" observedRunningTime="2025-12-12 16:26:38.977941816 +0000 UTC 
m=+2426.146581427" watchObservedRunningTime="2025-12-12 16:26:38.985737557 +0000 UTC m=+2426.154377158" Dec 12 16:26:42 crc kubenswrapper[4693]: I1212 16:26:42.358064 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:26:42 crc kubenswrapper[4693]: E1212 16:26:42.359238 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:26:57 crc kubenswrapper[4693]: I1212 16:26:57.357348 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:26:57 crc kubenswrapper[4693]: E1212 16:26:57.358199 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:27:11 crc kubenswrapper[4693]: I1212 16:27:11.357959 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:27:11 crc kubenswrapper[4693]: E1212 16:27:11.358867 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:27:21 crc kubenswrapper[4693]: I1212 16:27:21.420441 4693 generic.go:334] "Generic (PLEG): container finished" podID="a1ff98c9-e61b-4ebb-ac63-492868f4da29" containerID="c40fae5b63f2f3f1f41d57484a6f5648490250b199b8711f103c763930896d5e" exitCode=0 Dec 12 16:27:21 crc kubenswrapper[4693]: I1212 16:27:21.420960 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" event={"ID":"a1ff98c9-e61b-4ebb-ac63-492868f4da29","Type":"ContainerDied","Data":"c40fae5b63f2f3f1f41d57484a6f5648490250b199b8711f103c763930896d5e"} Dec 12 16:27:22 crc kubenswrapper[4693]: I1212 16:27:22.961443 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.075149 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1ff98c9-e61b-4ebb-ac63-492868f4da29-ssh-key\") pod \"a1ff98c9-e61b-4ebb-ac63-492868f4da29\" (UID: \"a1ff98c9-e61b-4ebb-ac63-492868f4da29\") " Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.075217 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1ff98c9-e61b-4ebb-ac63-492868f4da29-inventory\") pod \"a1ff98c9-e61b-4ebb-ac63-492868f4da29\" (UID: \"a1ff98c9-e61b-4ebb-ac63-492868f4da29\") " Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.075329 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2b26\" (UniqueName: \"kubernetes.io/projected/a1ff98c9-e61b-4ebb-ac63-492868f4da29-kube-api-access-f2b26\") pod \"a1ff98c9-e61b-4ebb-ac63-492868f4da29\" (UID: \"a1ff98c9-e61b-4ebb-ac63-492868f4da29\") " Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.081643 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ff98c9-e61b-4ebb-ac63-492868f4da29-kube-api-access-f2b26" (OuterVolumeSpecName: "kube-api-access-f2b26") pod "a1ff98c9-e61b-4ebb-ac63-492868f4da29" (UID: "a1ff98c9-e61b-4ebb-ac63-492868f4da29"). InnerVolumeSpecName "kube-api-access-f2b26". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.111640 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ff98c9-e61b-4ebb-ac63-492868f4da29-inventory" (OuterVolumeSpecName: "inventory") pod "a1ff98c9-e61b-4ebb-ac63-492868f4da29" (UID: "a1ff98c9-e61b-4ebb-ac63-492868f4da29"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.128566 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ff98c9-e61b-4ebb-ac63-492868f4da29-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a1ff98c9-e61b-4ebb-ac63-492868f4da29" (UID: "a1ff98c9-e61b-4ebb-ac63-492868f4da29"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.178585 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2b26\" (UniqueName: \"kubernetes.io/projected/a1ff98c9-e61b-4ebb-ac63-492868f4da29-kube-api-access-f2b26\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.178624 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1ff98c9-e61b-4ebb-ac63-492868f4da29-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.178659 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1ff98c9-e61b-4ebb-ac63-492868f4da29-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.369161 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:27:23 crc kubenswrapper[4693]: E1212 16:27:23.369700 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.447341 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" event={"ID":"a1ff98c9-e61b-4ebb-ac63-492868f4da29","Type":"ContainerDied","Data":"d8279b842bb3ea072552c220d9b478b64824284806600cc0e2c4f2f7c58e0917"} Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.447385 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8279b842bb3ea072552c220d9b478b64824284806600cc0e2c4f2f7c58e0917" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.447464 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s4tkv" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.548373 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9"] Dec 12 16:27:23 crc kubenswrapper[4693]: E1212 16:27:23.548973 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ff98c9-e61b-4ebb-ac63-492868f4da29" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.548996 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ff98c9-e61b-4ebb-ac63-492868f4da29" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.549285 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ff98c9-e61b-4ebb-ac63-492868f4da29" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.550404 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.553906 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.553999 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.554139 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.558066 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.563524 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9"] Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.587535 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9\" (UID: \"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.587600 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmdbw\" (UniqueName: \"kubernetes.io/projected/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-kube-api-access-hmdbw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9\" (UID: \"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.587735 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9\" (UID: \"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.690601 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9\" (UID: \"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.690783 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9\" (UID: \"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.690829 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmdbw\" (UniqueName: \"kubernetes.io/projected/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-kube-api-access-hmdbw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9\" 
(UID: \"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.695922 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9\" (UID: \"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.697305 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9\" (UID: \"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.716387 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmdbw\" (UniqueName: \"kubernetes.io/projected/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-kube-api-access-hmdbw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9\" (UID: \"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" Dec 12 16:27:23 crc kubenswrapper[4693]: I1212 16:27:23.882147 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" Dec 12 16:27:24 crc kubenswrapper[4693]: I1212 16:27:24.569633 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9"] Dec 12 16:27:25 crc kubenswrapper[4693]: I1212 16:27:25.471684 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" event={"ID":"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1","Type":"ContainerStarted","Data":"4846455e02c5c1c81bd370270c61557a822dc759c3ad0882ea6ea0bc9ecbea44"} Dec 12 16:27:25 crc kubenswrapper[4693]: I1212 16:27:25.472058 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" event={"ID":"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1","Type":"ContainerStarted","Data":"3141f9afcefed51d505d5b479ee8543834fd6e089b53f1638ec022c2df1f8c8a"} Dec 12 16:27:25 crc kubenswrapper[4693]: I1212 16:27:25.520055 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" podStartSLOduration=1.9125522209999999 podStartE2EDuration="2.520024659s" podCreationTimestamp="2025-12-12 16:27:23 +0000 UTC" firstStartedPulling="2025-12-12 16:27:24.575289712 +0000 UTC m=+2471.743929333" lastFinishedPulling="2025-12-12 16:27:25.18276217 +0000 UTC m=+2472.351401771" observedRunningTime="2025-12-12 16:27:25.492844035 +0000 UTC m=+2472.661483636" watchObservedRunningTime="2025-12-12 16:27:25.520024659 +0000 UTC m=+2472.688664260" Dec 12 16:27:38 crc kubenswrapper[4693]: I1212 16:27:38.358305 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:27:38 crc kubenswrapper[4693]: E1212 16:27:38.359843 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:27:41 crc kubenswrapper[4693]: I1212 16:27:41.058907 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-h5zgl"] Dec 12 16:27:41 crc kubenswrapper[4693]: I1212 16:27:41.069236 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-h5zgl"] Dec 12 16:27:41 crc kubenswrapper[4693]: I1212 16:27:41.373709 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e457e88c-30e2-45af-8a1c-d3056402343b" path="/var/lib/kubelet/pods/e457e88c-30e2-45af-8a1c-d3056402343b/volumes" Dec 12 16:27:52 crc kubenswrapper[4693]: I1212 16:27:52.357659 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:27:52 crc kubenswrapper[4693]: E1212 16:27:52.358654 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:28:00 crc kubenswrapper[4693]: I1212 16:28:00.120186 4693 scope.go:117] "RemoveContainer" containerID="bf5ca634fda2b51cb09dee2780442f1cbc4a786b9e3bbca86aa5aa70383f169d" Dec 12 16:28:06 crc kubenswrapper[4693]: I1212 16:28:06.358396 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:28:06 crc kubenswrapper[4693]: E1212 16:28:06.360210 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:28:21 crc kubenswrapper[4693]: I1212 16:28:21.148070 4693 generic.go:334] "Generic (PLEG): container finished" podID="a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1" containerID="4846455e02c5c1c81bd370270c61557a822dc759c3ad0882ea6ea0bc9ecbea44" exitCode=0 Dec 12 16:28:21 crc kubenswrapper[4693]: I1212 16:28:21.148290 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" event={"ID":"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1","Type":"ContainerDied","Data":"4846455e02c5c1c81bd370270c61557a822dc759c3ad0882ea6ea0bc9ecbea44"} Dec 12 16:28:21 crc kubenswrapper[4693]: I1212 16:28:21.358952 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:28:21 crc kubenswrapper[4693]: E1212 16:28:21.359261 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:28:22 crc kubenswrapper[4693]: I1212 16:28:22.626689 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" Dec 12 16:28:22 crc kubenswrapper[4693]: I1212 16:28:22.689260 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmdbw\" (UniqueName: \"kubernetes.io/projected/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-kube-api-access-hmdbw\") pod \"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1\" (UID: \"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1\") " Dec 12 16:28:22 crc kubenswrapper[4693]: I1212 16:28:22.689504 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-ssh-key\") pod \"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1\" (UID: \"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1\") " Dec 12 16:28:22 crc kubenswrapper[4693]: I1212 16:28:22.689723 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-inventory\") pod \"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1\" (UID: \"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1\") " Dec 12 16:28:22 crc kubenswrapper[4693]: I1212 16:28:22.697933 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-kube-api-access-hmdbw" (OuterVolumeSpecName: "kube-api-access-hmdbw") pod "a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1" (UID: "a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1"). InnerVolumeSpecName "kube-api-access-hmdbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:28:22 crc kubenswrapper[4693]: I1212 16:28:22.732169 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-inventory" (OuterVolumeSpecName: "inventory") pod "a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1" (UID: "a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:28:22 crc kubenswrapper[4693]: I1212 16:28:22.741906 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1" (UID: "a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:28:22 crc kubenswrapper[4693]: I1212 16:28:22.800434 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmdbw\" (UniqueName: \"kubernetes.io/projected/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-kube-api-access-hmdbw\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:22 crc kubenswrapper[4693]: I1212 16:28:22.800474 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:22 crc kubenswrapper[4693]: I1212 16:28:22.800487 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.174992 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" event={"ID":"a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1","Type":"ContainerDied","Data":"3141f9afcefed51d505d5b479ee8543834fd6e089b53f1638ec022c2df1f8c8a"} Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.175067 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3141f9afcefed51d505d5b479ee8543834fd6e089b53f1638ec022c2df1f8c8a" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.175084 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jrm9" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.269620 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cqfkt"] Dec 12 16:28:23 crc kubenswrapper[4693]: E1212 16:28:23.270529 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.270641 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.271049 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d16ed4-c27c-4596-a4d4-b6ecccfc33e1" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.272350 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.283882 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.283882 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.284011 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.284051 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.288316 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cqfkt"] Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.314132 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5d49dd9-173a-4010-877c-c5486e61e262-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cqfkt\" (UID: \"b5d49dd9-173a-4010-877c-c5486e61e262\") " pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.315042 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9tss\" (UniqueName: \"kubernetes.io/projected/b5d49dd9-173a-4010-877c-c5486e61e262-kube-api-access-s9tss\") pod \"ssh-known-hosts-edpm-deployment-cqfkt\" (UID: \"b5d49dd9-173a-4010-877c-c5486e61e262\") " pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.315348 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b5d49dd9-173a-4010-877c-c5486e61e262-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cqfkt\" (UID: \"b5d49dd9-173a-4010-877c-c5486e61e262\") " pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.418838 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9tss\" (UniqueName: \"kubernetes.io/projected/b5d49dd9-173a-4010-877c-c5486e61e262-kube-api-access-s9tss\") pod \"ssh-known-hosts-edpm-deployment-cqfkt\" (UID: \"b5d49dd9-173a-4010-877c-c5486e61e262\") " pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.419054 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b5d49dd9-173a-4010-877c-c5486e61e262-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cqfkt\" (UID: \"b5d49dd9-173a-4010-877c-c5486e61e262\") " pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.419115 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5d49dd9-173a-4010-877c-c5486e61e262-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cqfkt\" (UID: \"b5d49dd9-173a-4010-877c-c5486e61e262\") " pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" Dec 12 16:28:23 crc 
kubenswrapper[4693]: I1212 16:28:23.424703 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5d49dd9-173a-4010-877c-c5486e61e262-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cqfkt\" (UID: \"b5d49dd9-173a-4010-877c-c5486e61e262\") " pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.428170 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b5d49dd9-173a-4010-877c-c5486e61e262-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cqfkt\" (UID: \"b5d49dd9-173a-4010-877c-c5486e61e262\") " pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.443339 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9tss\" (UniqueName: \"kubernetes.io/projected/b5d49dd9-173a-4010-877c-c5486e61e262-kube-api-access-s9tss\") pod \"ssh-known-hosts-edpm-deployment-cqfkt\" (UID: \"b5d49dd9-173a-4010-877c-c5486e61e262\") " pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" Dec 12 16:28:23 crc kubenswrapper[4693]: I1212 16:28:23.625593 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" Dec 12 16:28:24 crc kubenswrapper[4693]: I1212 16:28:24.232378 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cqfkt"] Dec 12 16:28:25 crc kubenswrapper[4693]: I1212 16:28:25.193556 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" event={"ID":"b5d49dd9-173a-4010-877c-c5486e61e262","Type":"ContainerStarted","Data":"262d566d70176bfe02f5b0999c97a6883b7082dcd82b95e2d0145a40db9dd796"} Dec 12 16:28:25 crc kubenswrapper[4693]: I1212 16:28:25.194446 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" event={"ID":"b5d49dd9-173a-4010-877c-c5486e61e262","Type":"ContainerStarted","Data":"98f23a2f37d1a062e8603f85028c7ec3e91003f1d6c01015cf46c0ed70c854af"} Dec 12 16:28:25 crc kubenswrapper[4693]: I1212 16:28:25.220639 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" podStartSLOduration=1.575333043 podStartE2EDuration="2.220620852s" podCreationTimestamp="2025-12-12 16:28:23 +0000 UTC" firstStartedPulling="2025-12-12 16:28:24.238829464 +0000 UTC m=+2531.407469065" lastFinishedPulling="2025-12-12 16:28:24.884117253 +0000 UTC m=+2532.052756874" observedRunningTime="2025-12-12 16:28:25.211868885 +0000 UTC m=+2532.380508506" watchObservedRunningTime="2025-12-12 16:28:25.220620852 +0000 UTC m=+2532.389260453" Dec 12 16:28:30 crc kubenswrapper[4693]: I1212 16:28:30.052196 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-p4r6w"] Dec 12 16:28:30 crc kubenswrapper[4693]: I1212 16:28:30.066405 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-p4r6w"] Dec 12 16:28:31 crc kubenswrapper[4693]: I1212 16:28:31.368336 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a78008e-b445-4968-b856-0ce60d97383f" path="/var/lib/kubelet/pods/6a78008e-b445-4968-b856-0ce60d97383f/volumes" Dec 12 16:28:33 crc kubenswrapper[4693]: I1212 16:28:33.277942 4693 generic.go:334] "Generic (PLEG): container finished" 
podID="b5d49dd9-173a-4010-877c-c5486e61e262" containerID="262d566d70176bfe02f5b0999c97a6883b7082dcd82b95e2d0145a40db9dd796" exitCode=0 Dec 12 16:28:33 crc kubenswrapper[4693]: I1212 16:28:33.278023 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" event={"ID":"b5d49dd9-173a-4010-877c-c5486e61e262","Type":"ContainerDied","Data":"262d566d70176bfe02f5b0999c97a6883b7082dcd82b95e2d0145a40db9dd796"} Dec 12 16:28:34 crc kubenswrapper[4693]: I1212 16:28:34.798328 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" Dec 12 16:28:34 crc kubenswrapper[4693]: I1212 16:28:34.907125 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5d49dd9-173a-4010-877c-c5486e61e262-ssh-key-openstack-edpm-ipam\") pod \"b5d49dd9-173a-4010-877c-c5486e61e262\" (UID: \"b5d49dd9-173a-4010-877c-c5486e61e262\") " Dec 12 16:28:34 crc kubenswrapper[4693]: I1212 16:28:34.907338 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9tss\" (UniqueName: \"kubernetes.io/projected/b5d49dd9-173a-4010-877c-c5486e61e262-kube-api-access-s9tss\") pod \"b5d49dd9-173a-4010-877c-c5486e61e262\" (UID: \"b5d49dd9-173a-4010-877c-c5486e61e262\") " Dec 12 16:28:34 crc kubenswrapper[4693]: I1212 16:28:34.907521 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b5d49dd9-173a-4010-877c-c5486e61e262-inventory-0\") pod \"b5d49dd9-173a-4010-877c-c5486e61e262\" (UID: \"b5d49dd9-173a-4010-877c-c5486e61e262\") " Dec 12 16:28:34 crc kubenswrapper[4693]: I1212 16:28:34.915992 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5d49dd9-173a-4010-877c-c5486e61e262-kube-api-access-s9tss" (OuterVolumeSpecName: "kube-api-access-s9tss") pod "b5d49dd9-173a-4010-877c-c5486e61e262" (UID: "b5d49dd9-173a-4010-877c-c5486e61e262"). InnerVolumeSpecName "kube-api-access-s9tss". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:28:34 crc kubenswrapper[4693]: I1212 16:28:34.942463 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d49dd9-173a-4010-877c-c5486e61e262-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "b5d49dd9-173a-4010-877c-c5486e61e262" (UID: "b5d49dd9-173a-4010-877c-c5486e61e262"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:28:34 crc kubenswrapper[4693]: I1212 16:28:34.949553 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d49dd9-173a-4010-877c-c5486e61e262-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b5d49dd9-173a-4010-877c-c5486e61e262" (UID: "b5d49dd9-173a-4010-877c-c5486e61e262"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.010879 4693 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b5d49dd9-173a-4010-877c-c5486e61e262-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.010952 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5d49dd9-173a-4010-877c-c5486e61e262-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.010970 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9tss\" (UniqueName: \"kubernetes.io/projected/b5d49dd9-173a-4010-877c-c5486e61e262-kube-api-access-s9tss\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.375103 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.382474 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" Dec 12 16:28:35 crc kubenswrapper[4693]: E1212 16:28:35.383199 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.509975 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cqfkt" event={"ID":"b5d49dd9-173a-4010-877c-c5486e61e262","Type":"ContainerDied","Data":"98f23a2f37d1a062e8603f85028c7ec3e91003f1d6c01015cf46c0ed70c854af"} Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.510345 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98f23a2f37d1a062e8603f85028c7ec3e91003f1d6c01015cf46c0ed70c854af" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.540010 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws"] Dec 12 16:28:35 crc kubenswrapper[4693]: E1212 16:28:35.541210 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d49dd9-173a-4010-877c-c5486e61e262" containerName="ssh-known-hosts-edpm-deployment" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.541225 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d49dd9-173a-4010-877c-c5486e61e262" containerName="ssh-known-hosts-edpm-deployment" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.541507 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5d49dd9-173a-4010-877c-c5486e61e262" containerName="ssh-known-hosts-edpm-deployment" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.542365 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.548164 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.548548 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.549110 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.549254 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.571415 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws"] Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.658910 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p45sb\" (UniqueName: \"kubernetes.io/projected/d9276ff6-eeab-4323-9ddd-067707c7bde0-kube-api-access-p45sb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-sqkws\" (UID: \"d9276ff6-eeab-4323-9ddd-067707c7bde0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.659019 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9276ff6-eeab-4323-9ddd-067707c7bde0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-sqkws\" (UID: \"d9276ff6-eeab-4323-9ddd-067707c7bde0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.659369 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9276ff6-eeab-4323-9ddd-067707c7bde0-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-sqkws\" (UID: \"d9276ff6-eeab-4323-9ddd-067707c7bde0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.762221 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9276ff6-eeab-4323-9ddd-067707c7bde0-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-sqkws\" (UID: \"d9276ff6-eeab-4323-9ddd-067707c7bde0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.762490 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p45sb\" (UniqueName: \"kubernetes.io/projected/d9276ff6-eeab-4323-9ddd-067707c7bde0-kube-api-access-p45sb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-sqkws\" (UID: \"d9276ff6-eeab-4323-9ddd-067707c7bde0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.762544 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9276ff6-eeab-4323-9ddd-067707c7bde0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-sqkws\" (UID: \"d9276ff6-eeab-4323-9ddd-067707c7bde0\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.771028 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9276ff6-eeab-4323-9ddd-067707c7bde0-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-sqkws\" (UID: \"d9276ff6-eeab-4323-9ddd-067707c7bde0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.777101 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9276ff6-eeab-4323-9ddd-067707c7bde0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-sqkws\" (UID: \"d9276ff6-eeab-4323-9ddd-067707c7bde0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.784820 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p45sb\" (UniqueName: \"kubernetes.io/projected/d9276ff6-eeab-4323-9ddd-067707c7bde0-kube-api-access-p45sb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-sqkws\" (UID: \"d9276ff6-eeab-4323-9ddd-067707c7bde0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" Dec 12 16:28:35 crc kubenswrapper[4693]: I1212 16:28:35.874563 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" Dec 12 16:28:36 crc kubenswrapper[4693]: I1212 16:28:36.483973 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws"] Dec 12 16:28:36 crc kubenswrapper[4693]: W1212 16:28:36.484155 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9276ff6_eeab_4323_9ddd_067707c7bde0.slice/crio-4acf86fa8624638488f47f7ca94a93fc04befcec6ad4422042c328c4ba99955a WatchSource:0}: Error finding container 4acf86fa8624638488f47f7ca94a93fc04befcec6ad4422042c328c4ba99955a: Status 404 returned error can't find the container with id 4acf86fa8624638488f47f7ca94a93fc04befcec6ad4422042c328c4ba99955a Dec 12 16:28:37 crc kubenswrapper[4693]: I1212 16:28:37.407368 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" event={"ID":"d9276ff6-eeab-4323-9ddd-067707c7bde0","Type":"ContainerStarted","Data":"64d70cd0bff7820266e6f136572d9871085c465f27ab268e32d2c07e554903f2"} Dec 12 16:28:37 crc kubenswrapper[4693]: I1212 16:28:37.408110 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" event={"ID":"d9276ff6-eeab-4323-9ddd-067707c7bde0","Type":"ContainerStarted","Data":"4acf86fa8624638488f47f7ca94a93fc04befcec6ad4422042c328c4ba99955a"} Dec 12 16:28:37 crc kubenswrapper[4693]: I1212 16:28:37.433489 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" podStartSLOduration=1.997942411 podStartE2EDuration="2.433464044s" podCreationTimestamp="2025-12-12 16:28:35 +0000 UTC" firstStartedPulling="2025-12-12 16:28:36.486873348 +0000 UTC m=+2543.655512949" lastFinishedPulling="2025-12-12 16:28:36.922394961 +0000 UTC m=+2544.091034582" observedRunningTime="2025-12-12 16:28:37.426794214 +0000 UTC m=+2544.595433875" watchObservedRunningTime="2025-12-12 16:28:37.433464044 +0000 UTC 
m=+2544.602103665" Dec 12 16:28:47 crc kubenswrapper[4693]: I1212 16:28:47.840446 4693 generic.go:334] "Generic (PLEG): container finished" podID="d9276ff6-eeab-4323-9ddd-067707c7bde0" containerID="64d70cd0bff7820266e6f136572d9871085c465f27ab268e32d2c07e554903f2" exitCode=0 Dec 12 16:28:47 crc kubenswrapper[4693]: I1212 16:28:47.840543 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" event={"ID":"d9276ff6-eeab-4323-9ddd-067707c7bde0","Type":"ContainerDied","Data":"64d70cd0bff7820266e6f136572d9871085c465f27ab268e32d2c07e554903f2"} Dec 12 16:28:48 crc kubenswrapper[4693]: I1212 16:28:48.358461 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:28:48 crc kubenswrapper[4693]: E1212 16:28:48.358697 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.537008 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.599531 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9276ff6-eeab-4323-9ddd-067707c7bde0-ssh-key\") pod \"d9276ff6-eeab-4323-9ddd-067707c7bde0\" (UID: \"d9276ff6-eeab-4323-9ddd-067707c7bde0\") " Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.599586 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9276ff6-eeab-4323-9ddd-067707c7bde0-inventory\") pod \"d9276ff6-eeab-4323-9ddd-067707c7bde0\" (UID: \"d9276ff6-eeab-4323-9ddd-067707c7bde0\") " Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.599646 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p45sb\" (UniqueName: \"kubernetes.io/projected/d9276ff6-eeab-4323-9ddd-067707c7bde0-kube-api-access-p45sb\") pod \"d9276ff6-eeab-4323-9ddd-067707c7bde0\" (UID: \"d9276ff6-eeab-4323-9ddd-067707c7bde0\") " Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.605846 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9276ff6-eeab-4323-9ddd-067707c7bde0-kube-api-access-p45sb" (OuterVolumeSpecName: "kube-api-access-p45sb") pod "d9276ff6-eeab-4323-9ddd-067707c7bde0" (UID: "d9276ff6-eeab-4323-9ddd-067707c7bde0"). InnerVolumeSpecName "kube-api-access-p45sb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.640479 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9276ff6-eeab-4323-9ddd-067707c7bde0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d9276ff6-eeab-4323-9ddd-067707c7bde0" (UID: "d9276ff6-eeab-4323-9ddd-067707c7bde0"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.642606 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9276ff6-eeab-4323-9ddd-067707c7bde0-inventory" (OuterVolumeSpecName: "inventory") pod "d9276ff6-eeab-4323-9ddd-067707c7bde0" (UID: "d9276ff6-eeab-4323-9ddd-067707c7bde0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.702932 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9276ff6-eeab-4323-9ddd-067707c7bde0-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.702970 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9276ff6-eeab-4323-9ddd-067707c7bde0-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.702981 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p45sb\" (UniqueName: \"kubernetes.io/projected/d9276ff6-eeab-4323-9ddd-067707c7bde0-kube-api-access-p45sb\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.873600 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" event={"ID":"d9276ff6-eeab-4323-9ddd-067707c7bde0","Type":"ContainerDied","Data":"4acf86fa8624638488f47f7ca94a93fc04befcec6ad4422042c328c4ba99955a"} Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.873644 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4acf86fa8624638488f47f7ca94a93fc04befcec6ad4422042c328c4ba99955a" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.873803 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-sqkws" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.979883 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz"] Dec 12 16:28:49 crc kubenswrapper[4693]: E1212 16:28:49.980738 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9276ff6-eeab-4323-9ddd-067707c7bde0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.980767 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9276ff6-eeab-4323-9ddd-067707c7bde0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.981110 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9276ff6-eeab-4323-9ddd-067707c7bde0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.982501 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.984553 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.992947 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.993265 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:28:49 crc kubenswrapper[4693]: I1212 16:28:49.993544 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:28:50 crc kubenswrapper[4693]: I1212 16:28:50.034531 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz"] Dec 12 16:28:50 crc kubenswrapper[4693]: I1212 16:28:50.124074 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv4dc\" (UniqueName: \"kubernetes.io/projected/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-kube-api-access-pv4dc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz\" (UID: \"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" Dec 12 16:28:50 crc kubenswrapper[4693]: I1212 16:28:50.124166 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz\" (UID: \"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" Dec 12 16:28:50 crc kubenswrapper[4693]: I1212 16:28:50.124226 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz\" (UID: \"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" Dec 12 16:28:50 crc kubenswrapper[4693]: I1212 16:28:50.226603 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv4dc\" (UniqueName: \"kubernetes.io/projected/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-kube-api-access-pv4dc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz\" (UID: \"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" Dec 12 16:28:50 crc kubenswrapper[4693]: I1212 16:28:50.226678 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz\" (UID: \"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" Dec 12 16:28:50 crc kubenswrapper[4693]: I1212 16:28:50.226722 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz\" (UID: 
\"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" Dec 12 16:28:50 crc kubenswrapper[4693]: I1212 16:28:50.231469 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz\" (UID: \"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" Dec 12 16:28:50 crc kubenswrapper[4693]: I1212 16:28:50.231563 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz\" (UID: \"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" Dec 12 16:28:50 crc kubenswrapper[4693]: I1212 16:28:50.262614 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv4dc\" (UniqueName: \"kubernetes.io/projected/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-kube-api-access-pv4dc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz\" (UID: \"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" Dec 12 16:28:50 crc kubenswrapper[4693]: I1212 16:28:50.308397 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" Dec 12 16:28:50 crc kubenswrapper[4693]: I1212 16:28:50.920360 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz"] Dec 12 16:28:51 crc kubenswrapper[4693]: I1212 16:28:51.905504 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" event={"ID":"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2","Type":"ContainerStarted","Data":"2c6fa76b051eaf6c3a5dd5e1497c4c55a33028d1d2988fab51e64fc592a9b4fc"} Dec 12 16:28:52 crc kubenswrapper[4693]: I1212 16:28:52.918928 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" event={"ID":"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2","Type":"ContainerStarted","Data":"5c575e619d38aed300bdbe50ccbe1cf05f1bf1a761ce1b950061b0648797a0a4"} Dec 12 16:28:52 crc kubenswrapper[4693]: I1212 16:28:52.951159 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" podStartSLOduration=3.136538837 podStartE2EDuration="3.951137367s" podCreationTimestamp="2025-12-12 16:28:49 +0000 UTC" firstStartedPulling="2025-12-12 16:28:50.926949517 +0000 UTC m=+2558.095589118" lastFinishedPulling="2025-12-12 16:28:51.741548047 +0000 UTC m=+2558.910187648" observedRunningTime="2025-12-12 16:28:52.938168507 +0000 UTC m=+2560.106808128" watchObservedRunningTime="2025-12-12 16:28:52.951137367 +0000 UTC m=+2560.119776978" Dec 12 16:29:00 crc kubenswrapper[4693]: I1212 16:29:00.203766 4693 scope.go:117] "RemoveContainer" containerID="9efafdf7433ee5f1ca6b75bc210f67349756dec9285ecd1637a965b3b167608c" Dec 12 16:29:00 crc kubenswrapper[4693]: I1212 16:29:00.357254 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:29:00 crc kubenswrapper[4693]: E1212 16:29:00.357643 4693 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:29:04 crc kubenswrapper[4693]: I1212 16:29:04.072749 4693 generic.go:334] "Generic (PLEG): container finished" podID="8ca04c35-c58c-4df8-bd3f-c73cee1a28e2" containerID="5c575e619d38aed300bdbe50ccbe1cf05f1bf1a761ce1b950061b0648797a0a4" exitCode=0 Dec 12 16:29:04 crc kubenswrapper[4693]: I1212 16:29:04.073457 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" event={"ID":"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2","Type":"ContainerDied","Data":"5c575e619d38aed300bdbe50ccbe1cf05f1bf1a761ce1b950061b0648797a0a4"} Dec 12 16:29:05 crc kubenswrapper[4693]: I1212 16:29:05.544687 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" Dec 12 16:29:05 crc kubenswrapper[4693]: I1212 16:29:05.615316 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-ssh-key\") pod \"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2\" (UID: \"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2\") " Dec 12 16:29:05 crc kubenswrapper[4693]: I1212 16:29:05.615610 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-inventory\") pod \"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2\" (UID: \"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2\") " Dec 12 16:29:05 crc kubenswrapper[4693]: I1212 16:29:05.615696 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv4dc\" (UniqueName: \"kubernetes.io/projected/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-kube-api-access-pv4dc\") pod \"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2\" (UID: \"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2\") " Dec 12 16:29:05 crc kubenswrapper[4693]: I1212 16:29:05.623978 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-kube-api-access-pv4dc" (OuterVolumeSpecName: "kube-api-access-pv4dc") pod "8ca04c35-c58c-4df8-bd3f-c73cee1a28e2" (UID: "8ca04c35-c58c-4df8-bd3f-c73cee1a28e2"). InnerVolumeSpecName "kube-api-access-pv4dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:29:05 crc kubenswrapper[4693]: I1212 16:29:05.648441 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8ca04c35-c58c-4df8-bd3f-c73cee1a28e2" (UID: "8ca04c35-c58c-4df8-bd3f-c73cee1a28e2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:29:05 crc kubenswrapper[4693]: I1212 16:29:05.662401 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-inventory" (OuterVolumeSpecName: "inventory") pod "8ca04c35-c58c-4df8-bd3f-c73cee1a28e2" (UID: "8ca04c35-c58c-4df8-bd3f-c73cee1a28e2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:29:05 crc kubenswrapper[4693]: I1212 16:29:05.719966 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:05 crc kubenswrapper[4693]: I1212 16:29:05.720572 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:05 crc kubenswrapper[4693]: I1212 16:29:05.720684 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv4dc\" (UniqueName: \"kubernetes.io/projected/8ca04c35-c58c-4df8-bd3f-c73cee1a28e2-kube-api-access-pv4dc\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.098461 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" event={"ID":"8ca04c35-c58c-4df8-bd3f-c73cee1a28e2","Type":"ContainerDied","Data":"2c6fa76b051eaf6c3a5dd5e1497c4c55a33028d1d2988fab51e64fc592a9b4fc"} Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.098493 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hmwz" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.098499 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c6fa76b051eaf6c3a5dd5e1497c4c55a33028d1d2988fab51e64fc592a9b4fc" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.199849 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k"] Dec 12 16:29:06 crc kubenswrapper[4693]: E1212 16:29:06.200680 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca04c35-c58c-4df8-bd3f-c73cee1a28e2" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.200712 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca04c35-c58c-4df8-bd3f-c73cee1a28e2" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.201124 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca04c35-c58c-4df8-bd3f-c73cee1a28e2" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.202498 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.206192 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.206698 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.207266 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.207533 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.207645 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.207743 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.207543 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.207793 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.210022 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.217645 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k"] Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.336972 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.337035 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.337072 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.337116 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.337326 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2wsl\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-kube-api-access-b2wsl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.337585 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.337786 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.337856 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.338146 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.338358 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.338508 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.338771 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.338945 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.339245 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.339357 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.339401 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.441871 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.441934 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-inventory\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.441964 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.441995 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.442013 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2wsl\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-kube-api-access-b2wsl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.442052 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.442075 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.442092 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.442143 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: 
I1212 16:29:06.442185 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.442213 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.442259 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.442300 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.442374 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.442398 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.442417 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.446777 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.447259 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.448722 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.448793 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.449363 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.451074 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.458829 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.459658 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: 
\"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.459677 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.459984 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.460036 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.460117 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.460847 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.460874 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.464025 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2wsl\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-kube-api-access-b2wsl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.465561 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:06 crc kubenswrapper[4693]: I1212 16:29:06.521512 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:07 crc kubenswrapper[4693]: W1212 16:29:07.161611 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6acc5ad2_f3fb_4b36_95c1_83c4a5ab2eae.slice/crio-47354b655805322a010493152c7f0f4b1b90ff35a52f6653130af0eb54bb8b2a WatchSource:0}: Error finding container 47354b655805322a010493152c7f0f4b1b90ff35a52f6653130af0eb54bb8b2a: Status 404 returned error can't find the container with id 47354b655805322a010493152c7f0f4b1b90ff35a52f6653130af0eb54bb8b2a Dec 12 16:29:07 crc kubenswrapper[4693]: I1212 16:29:07.171866 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k"] Dec 12 16:29:08 crc kubenswrapper[4693]: I1212 16:29:08.120436 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" event={"ID":"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae","Type":"ContainerStarted","Data":"47354b655805322a010493152c7f0f4b1b90ff35a52f6653130af0eb54bb8b2a"} Dec 12 16:29:10 crc kubenswrapper[4693]: I1212 16:29:10.147046 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" event={"ID":"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae","Type":"ContainerStarted","Data":"9619a8e8645c908dd962f0235c717aebd1e505e4932c11cae1a420fc80139d8f"} Dec 12 16:29:10 crc kubenswrapper[4693]: I1212 16:29:10.180006 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" podStartSLOduration=2.283386974 podStartE2EDuration="4.1799848s" podCreationTimestamp="2025-12-12 16:29:06 +0000 UTC" firstStartedPulling="2025-12-12 16:29:07.186136928 +0000 UTC m=+2574.354776529" lastFinishedPulling="2025-12-12 16:29:09.082734744 +0000 UTC m=+2576.251374355" observedRunningTime="2025-12-12 16:29:10.176912907 +0000 UTC m=+2577.345552558" watchObservedRunningTime="2025-12-12 16:29:10.1799848 +0000 UTC m=+2577.348624401" Dec 12 16:29:12 crc kubenswrapper[4693]: I1212 16:29:12.357601 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:29:12 crc kubenswrapper[4693]: E1212 16:29:12.358205 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:29:27 crc kubenswrapper[4693]: I1212 16:29:27.357222 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:29:27 crc kubenswrapper[4693]: E1212 16:29:27.358024 4693 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:29:42 crc kubenswrapper[4693]: I1212 16:29:42.357811 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:29:42 crc kubenswrapper[4693]: E1212 16:29:42.358927 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:29:54 crc kubenswrapper[4693]: I1212 16:29:54.357989 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:29:54 crc kubenswrapper[4693]: I1212 16:29:54.681906 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"1805e1e4e83dce627739eb17f054940726e4ffb2a25197f8e6f181e76311c752"} Dec 12 16:29:57 crc kubenswrapper[4693]: I1212 16:29:57.735386 4693 generic.go:334] "Generic (PLEG): container finished" podID="6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" containerID="9619a8e8645c908dd962f0235c717aebd1e505e4932c11cae1a420fc80139d8f" exitCode=0 Dec 12 16:29:57 crc kubenswrapper[4693]: I1212 16:29:57.735497 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" event={"ID":"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae","Type":"ContainerDied","Data":"9619a8e8645c908dd962f0235c717aebd1e505e4932c11cae1a420fc80139d8f"} Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.293120 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.385683 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-repo-setup-combined-ca-bundle\") pod \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.386006 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-libvirt-combined-ca-bundle\") pod \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.386070 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-ssh-key\") pod \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.386123 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2wsl\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-kube-api-access-b2wsl\") pod \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.386152 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-ovn-combined-ca-bundle\") pod \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.386176 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.386243 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.386265 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-bootstrap-combined-ca-bundle\") pod \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.386306 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-inventory\") pod \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 
16:29:59.386341 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-telemetry-power-monitoring-combined-ca-bundle\") pod \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.386375 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-nova-combined-ca-bundle\") pod \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.386438 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-neutron-metadata-combined-ca-bundle\") pod \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.386464 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-telemetry-combined-ca-bundle\") pod \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.386520 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-ovn-default-certs-0\") pod \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.386546 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.386569 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\" (UID: \"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae\") " Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.392850 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" (UID: "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.393949 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" (UID: "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.394252 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-kube-api-access-b2wsl" (OuterVolumeSpecName: "kube-api-access-b2wsl") pod "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" (UID: "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae"). InnerVolumeSpecName "kube-api-access-b2wsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.394350 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" (UID: "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.396235 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" (UID: "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.396998 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" (UID: "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.397020 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" (UID: "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.397066 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" (UID: "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.397656 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" (UID: "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.399880 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" (UID: "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.401390 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" (UID: "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.401389 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" (UID: "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.403182 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" (UID: "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.413967 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" (UID: "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.430367 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" (UID: "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.443034 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-inventory" (OuterVolumeSpecName: "inventory") pod "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" (UID: "6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.489719 4693 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.490043 4693 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.490177 4693 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.490287 4693 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.490386 4693 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.490469 4693 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.490585 4693 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.490715 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.490901 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2wsl\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-kube-api-access-b2wsl\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.490996 4693 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.491064 4693 
reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.491128 4693 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.491196 4693 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.491260 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.491666 4693 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.491795 4693 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.775372 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" event={"ID":"6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae","Type":"ContainerDied","Data":"47354b655805322a010493152c7f0f4b1b90ff35a52f6653130af0eb54bb8b2a"} Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.775758 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47354b655805322a010493152c7f0f4b1b90ff35a52f6653130af0eb54bb8b2a" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.775873 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9rj6k" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.866443 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5"] Dec 12 16:29:59 crc kubenswrapper[4693]: E1212 16:29:59.867111 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.867136 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.867456 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="6acc5ad2-f3fb-4b36-95c1-83c4a5ab2eae" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.868623 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.875673 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5"] Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.909956 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.910024 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.910472 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.914784 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:29:59 crc kubenswrapper[4693]: I1212 16:29:59.916296 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.011694 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w74c5\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.011926 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmmsx\" (UniqueName: \"kubernetes.io/projected/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-kube-api-access-fmmsx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w74c5\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.012088 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w74c5\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.012151 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w74c5\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.012561 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w74c5\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.114844 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w74c5\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.114982 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmmsx\" (UniqueName: \"kubernetes.io/projected/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-kube-api-access-fmmsx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w74c5\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.115072 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w74c5\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.115104 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w74c5\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.115159 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w74c5\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.116340 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w74c5\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.123170 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w74c5\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.123713 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w74c5\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.124127 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w74c5\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.148765 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmmsx\" (UniqueName: \"kubernetes.io/projected/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-kube-api-access-fmmsx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w74c5\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.157396 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t"] Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.159452 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.162208 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.162360 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.172830 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t"] Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.218161 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b042caa-be5f-4c07-9a47-2e434e76d777-secret-volume\") pod \"collect-profiles-29425950-gk77t\" (UID: \"4b042caa-be5f-4c07-9a47-2e434e76d777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.218330 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b042caa-be5f-4c07-9a47-2e434e76d777-config-volume\") pod \"collect-profiles-29425950-gk77t\" (UID: \"4b042caa-be5f-4c07-9a47-2e434e76d777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.218483 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pft6p\" (UniqueName: \"kubernetes.io/projected/4b042caa-be5f-4c07-9a47-2e434e76d777-kube-api-access-pft6p\") pod \"collect-profiles-29425950-gk77t\" (UID: \"4b042caa-be5f-4c07-9a47-2e434e76d777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.226500 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.322152 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b042caa-be5f-4c07-9a47-2e434e76d777-secret-volume\") pod \"collect-profiles-29425950-gk77t\" (UID: \"4b042caa-be5f-4c07-9a47-2e434e76d777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.322337 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b042caa-be5f-4c07-9a47-2e434e76d777-config-volume\") pod \"collect-profiles-29425950-gk77t\" (UID: \"4b042caa-be5f-4c07-9a47-2e434e76d777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.322538 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pft6p\" (UniqueName: \"kubernetes.io/projected/4b042caa-be5f-4c07-9a47-2e434e76d777-kube-api-access-pft6p\") pod \"collect-profiles-29425950-gk77t\" (UID: \"4b042caa-be5f-4c07-9a47-2e434e76d777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.323487 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b042caa-be5f-4c07-9a47-2e434e76d777-config-volume\") pod \"collect-profiles-29425950-gk77t\" (UID: \"4b042caa-be5f-4c07-9a47-2e434e76d777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.326683 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b042caa-be5f-4c07-9a47-2e434e76d777-secret-volume\") pod \"collect-profiles-29425950-gk77t\" (UID: \"4b042caa-be5f-4c07-9a47-2e434e76d777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t" Dec 12 16:30:00 crc kubenswrapper[4693]: I1212 16:30:00.343153 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pft6p\" (UniqueName: \"kubernetes.io/projected/4b042caa-be5f-4c07-9a47-2e434e76d777-kube-api-access-pft6p\") pod \"collect-profiles-29425950-gk77t\" (UID: \"4b042caa-be5f-4c07-9a47-2e434e76d777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t" Dec 12 16:30:01 crc kubenswrapper[4693]: I1212 16:30:00.464186 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t" Dec 12 16:30:01 crc kubenswrapper[4693]: I1212 16:30:01.849374 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t"] Dec 12 16:30:01 crc kubenswrapper[4693]: I1212 16:30:01.902957 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5"] Dec 12 16:30:01 crc kubenswrapper[4693]: W1212 16:30:01.907598 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecd78dd6_2f3a_4cb0_8966_10c8c680c998.slice/crio-a005392582418641a8f96860674a66099c785cc94157ed22fa61a2bd825ee57d WatchSource:0}: Error finding container a005392582418641a8f96860674a66099c785cc94157ed22fa61a2bd825ee57d: Status 404 returned error can't find the container with id a005392582418641a8f96860674a66099c785cc94157ed22fa61a2bd825ee57d Dec 12 16:30:02 crc kubenswrapper[4693]: I1212 16:30:02.811960 4693 generic.go:334] "Generic (PLEG): container finished" podID="4b042caa-be5f-4c07-9a47-2e434e76d777" containerID="5a7c42971a2e9583e866f36edc8b2c2b7f81f8a20d2c9c90160b8e937b5826a8" exitCode=0 Dec 12 16:30:02 crc kubenswrapper[4693]: I1212 16:30:02.812326 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t" event={"ID":"4b042caa-be5f-4c07-9a47-2e434e76d777","Type":"ContainerDied","Data":"5a7c42971a2e9583e866f36edc8b2c2b7f81f8a20d2c9c90160b8e937b5826a8"} Dec 12 16:30:02 crc kubenswrapper[4693]: I1212 16:30:02.812355 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t" event={"ID":"4b042caa-be5f-4c07-9a47-2e434e76d777","Type":"ContainerStarted","Data":"6811baa583864f02847b9291c37af976d0c7a7ed837623baa462b41baa844c34"} Dec 12 16:30:02 crc kubenswrapper[4693]: I1212 16:30:02.825340 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" event={"ID":"ecd78dd6-2f3a-4cb0-8966-10c8c680c998","Type":"ContainerStarted","Data":"a005392582418641a8f96860674a66099c785cc94157ed22fa61a2bd825ee57d"} Dec 12 16:30:03 crc kubenswrapper[4693]: I1212 16:30:03.838944 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" event={"ID":"ecd78dd6-2f3a-4cb0-8966-10c8c680c998","Type":"ContainerStarted","Data":"54f74095d8bc779acbda9d18c50f369f03e82674e5b923c91386dda562eda7f2"} Dec 12 16:30:03 crc kubenswrapper[4693]: I1212 16:30:03.868168 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" podStartSLOduration=3.962530589 podStartE2EDuration="4.86815151s" podCreationTimestamp="2025-12-12 16:29:59 +0000 UTC" firstStartedPulling="2025-12-12 16:30:01.910024792 +0000 UTC m=+2629.078664393" lastFinishedPulling="2025-12-12 16:30:02.815645713 +0000 UTC m=+2629.984285314" observedRunningTime="2025-12-12 16:30:03.858672844 +0000 UTC m=+2631.027312435" watchObservedRunningTime="2025-12-12 16:30:03.86815151 +0000 UTC m=+2631.036791111" Dec 12 16:30:04 crc kubenswrapper[4693]: I1212 16:30:04.329817 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t" Dec 12 16:30:04 crc kubenswrapper[4693]: I1212 16:30:04.452382 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b042caa-be5f-4c07-9a47-2e434e76d777-secret-volume\") pod \"4b042caa-be5f-4c07-9a47-2e434e76d777\" (UID: \"4b042caa-be5f-4c07-9a47-2e434e76d777\") " Dec 12 16:30:04 crc kubenswrapper[4693]: I1212 16:30:04.452659 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pft6p\" (UniqueName: \"kubernetes.io/projected/4b042caa-be5f-4c07-9a47-2e434e76d777-kube-api-access-pft6p\") pod \"4b042caa-be5f-4c07-9a47-2e434e76d777\" (UID: \"4b042caa-be5f-4c07-9a47-2e434e76d777\") " Dec 12 16:30:04 crc kubenswrapper[4693]: I1212 16:30:04.452811 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b042caa-be5f-4c07-9a47-2e434e76d777-config-volume\") pod \"4b042caa-be5f-4c07-9a47-2e434e76d777\" (UID: \"4b042caa-be5f-4c07-9a47-2e434e76d777\") " Dec 12 16:30:04 crc kubenswrapper[4693]: I1212 16:30:04.453479 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b042caa-be5f-4c07-9a47-2e434e76d777-config-volume" (OuterVolumeSpecName: "config-volume") pod "4b042caa-be5f-4c07-9a47-2e434e76d777" (UID: "4b042caa-be5f-4c07-9a47-2e434e76d777"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:30:04 crc kubenswrapper[4693]: I1212 16:30:04.453842 4693 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b042caa-be5f-4c07-9a47-2e434e76d777-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:30:04 crc kubenswrapper[4693]: I1212 16:30:04.458449 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b042caa-be5f-4c07-9a47-2e434e76d777-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4b042caa-be5f-4c07-9a47-2e434e76d777" (UID: "4b042caa-be5f-4c07-9a47-2e434e76d777"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:30:04 crc kubenswrapper[4693]: I1212 16:30:04.458557 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b042caa-be5f-4c07-9a47-2e434e76d777-kube-api-access-pft6p" (OuterVolumeSpecName: "kube-api-access-pft6p") pod "4b042caa-be5f-4c07-9a47-2e434e76d777" (UID: "4b042caa-be5f-4c07-9a47-2e434e76d777"). InnerVolumeSpecName "kube-api-access-pft6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:30:04 crc kubenswrapper[4693]: I1212 16:30:04.556501 4693 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b042caa-be5f-4c07-9a47-2e434e76d777-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:30:04 crc kubenswrapper[4693]: I1212 16:30:04.556546 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pft6p\" (UniqueName: \"kubernetes.io/projected/4b042caa-be5f-4c07-9a47-2e434e76d777-kube-api-access-pft6p\") on node \"crc\" DevicePath \"\"" Dec 12 16:30:04 crc kubenswrapper[4693]: I1212 16:30:04.867778 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t" Dec 12 16:30:04 crc kubenswrapper[4693]: I1212 16:30:04.868956 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t" event={"ID":"4b042caa-be5f-4c07-9a47-2e434e76d777","Type":"ContainerDied","Data":"6811baa583864f02847b9291c37af976d0c7a7ed837623baa462b41baa844c34"} Dec 12 16:30:04 crc kubenswrapper[4693]: I1212 16:30:04.872429 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6811baa583864f02847b9291c37af976d0c7a7ed837623baa462b41baa844c34" Dec 12 16:30:05 crc kubenswrapper[4693]: I1212 16:30:05.405019 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm"] Dec 12 16:30:05 crc kubenswrapper[4693]: I1212 16:30:05.415470 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425905-5fvqm"] Dec 12 16:30:07 crc kubenswrapper[4693]: I1212 16:30:07.375952 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f92dec3b-e25a-4f3f-a004-e85cc51093c5" path="/var/lib/kubelet/pods/f92dec3b-e25a-4f3f-a004-e85cc51093c5/volumes" Dec 12 16:30:47 crc kubenswrapper[4693]: I1212 16:30:47.850443 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2kclt"] Dec 12 16:30:47 crc kubenswrapper[4693]: E1212 16:30:47.851910 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b042caa-be5f-4c07-9a47-2e434e76d777" containerName="collect-profiles" Dec 12 16:30:47 crc kubenswrapper[4693]: I1212 16:30:47.851946 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b042caa-be5f-4c07-9a47-2e434e76d777" containerName="collect-profiles" Dec 12 16:30:47 crc kubenswrapper[4693]: I1212 16:30:47.852504 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b042caa-be5f-4c07-9a47-2e434e76d777" containerName="collect-profiles" Dec 12 16:30:47 crc kubenswrapper[4693]: I1212 16:30:47.856507 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2kclt" Dec 12 16:30:47 crc kubenswrapper[4693]: I1212 16:30:47.881513 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2kclt"] Dec 12 16:30:47 crc kubenswrapper[4693]: I1212 16:30:47.907435 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15772132-7104-4b40-b3a2-c38c2058b269-catalog-content\") pod \"redhat-operators-2kclt\" (UID: \"15772132-7104-4b40-b3a2-c38c2058b269\") " pod="openshift-marketplace/redhat-operators-2kclt" Dec 12 16:30:47 crc kubenswrapper[4693]: I1212 16:30:47.907613 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15772132-7104-4b40-b3a2-c38c2058b269-utilities\") pod \"redhat-operators-2kclt\" (UID: \"15772132-7104-4b40-b3a2-c38c2058b269\") " pod="openshift-marketplace/redhat-operators-2kclt" Dec 12 16:30:47 crc kubenswrapper[4693]: I1212 16:30:47.907745 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bvv5\" (UniqueName: \"kubernetes.io/projected/15772132-7104-4b40-b3a2-c38c2058b269-kube-api-access-8bvv5\") pod \"redhat-operators-2kclt\" (UID: \"15772132-7104-4b40-b3a2-c38c2058b269\") " pod="openshift-marketplace/redhat-operators-2kclt" Dec 12 16:30:48 crc kubenswrapper[4693]: I1212 16:30:48.009455 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15772132-7104-4b40-b3a2-c38c2058b269-utilities\") pod \"redhat-operators-2kclt\" (UID: \"15772132-7104-4b40-b3a2-c38c2058b269\") " pod="openshift-marketplace/redhat-operators-2kclt" Dec 12 16:30:48 crc kubenswrapper[4693]: I1212 16:30:48.009547 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bvv5\" (UniqueName: \"kubernetes.io/projected/15772132-7104-4b40-b3a2-c38c2058b269-kube-api-access-8bvv5\") pod \"redhat-operators-2kclt\" (UID: \"15772132-7104-4b40-b3a2-c38c2058b269\") " pod="openshift-marketplace/redhat-operators-2kclt" Dec 12 16:30:48 crc kubenswrapper[4693]: I1212 16:30:48.009640 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15772132-7104-4b40-b3a2-c38c2058b269-catalog-content\") pod \"redhat-operators-2kclt\" (UID: \"15772132-7104-4b40-b3a2-c38c2058b269\") " pod="openshift-marketplace/redhat-operators-2kclt" Dec 12 16:30:48 crc kubenswrapper[4693]: I1212 16:30:48.009900 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15772132-7104-4b40-b3a2-c38c2058b269-utilities\") pod \"redhat-operators-2kclt\" (UID: \"15772132-7104-4b40-b3a2-c38c2058b269\") " pod="openshift-marketplace/redhat-operators-2kclt" Dec 12 16:30:48 crc kubenswrapper[4693]: I1212 16:30:48.009933 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15772132-7104-4b40-b3a2-c38c2058b269-catalog-content\") pod \"redhat-operators-2kclt\" (UID: \"15772132-7104-4b40-b3a2-c38c2058b269\") " pod="openshift-marketplace/redhat-operators-2kclt" Dec 12 16:30:48 crc kubenswrapper[4693]: I1212 16:30:48.032208 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8bvv5\" (UniqueName: \"kubernetes.io/projected/15772132-7104-4b40-b3a2-c38c2058b269-kube-api-access-8bvv5\") pod \"redhat-operators-2kclt\" (UID: \"15772132-7104-4b40-b3a2-c38c2058b269\") " pod="openshift-marketplace/redhat-operators-2kclt" Dec 12 16:30:48 crc kubenswrapper[4693]: I1212 16:30:48.194766 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kclt" Dec 12 16:30:48 crc kubenswrapper[4693]: I1212 16:30:48.664241 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2kclt"] Dec 12 16:30:49 crc kubenswrapper[4693]: I1212 16:30:49.443648 4693 generic.go:334] "Generic (PLEG): container finished" podID="15772132-7104-4b40-b3a2-c38c2058b269" containerID="d8032966571af3aa2c9acb972fe332363126533efbdec5db7781fb96c70d6df0" exitCode=0 Dec 12 16:30:49 crc kubenswrapper[4693]: I1212 16:30:49.443797 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kclt" event={"ID":"15772132-7104-4b40-b3a2-c38c2058b269","Type":"ContainerDied","Data":"d8032966571af3aa2c9acb972fe332363126533efbdec5db7781fb96c70d6df0"} Dec 12 16:30:49 crc kubenswrapper[4693]: I1212 16:30:49.444110 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kclt" event={"ID":"15772132-7104-4b40-b3a2-c38c2058b269","Type":"ContainerStarted","Data":"256fabe76a2558e63fe27812542750d16c09b08ce0852a136cf6077ce07bcf48"} Dec 12 16:30:50 crc kubenswrapper[4693]: I1212 16:30:50.463884 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kclt" event={"ID":"15772132-7104-4b40-b3a2-c38c2058b269","Type":"ContainerStarted","Data":"de8922ddf73abcd107cc86d276c0d49fa34a4fe29d183ec52dee9ad6f0557b9c"} Dec 12 16:30:54 crc kubenswrapper[4693]: I1212 16:30:54.506045 4693 generic.go:334] "Generic (PLEG): container finished" podID="15772132-7104-4b40-b3a2-c38c2058b269" containerID="de8922ddf73abcd107cc86d276c0d49fa34a4fe29d183ec52dee9ad6f0557b9c" exitCode=0 Dec 12 16:30:54 crc kubenswrapper[4693]: I1212 16:30:54.506104 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kclt" event={"ID":"15772132-7104-4b40-b3a2-c38c2058b269","Type":"ContainerDied","Data":"de8922ddf73abcd107cc86d276c0d49fa34a4fe29d183ec52dee9ad6f0557b9c"} Dec 12 16:30:56 crc kubenswrapper[4693]: I1212 16:30:56.529149 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kclt" event={"ID":"15772132-7104-4b40-b3a2-c38c2058b269","Type":"ContainerStarted","Data":"67839ce3b51c170fe1d5d3b79e12bb1b42c96d6ddfb88b27b495ef956b5c3c09"} Dec 12 16:30:56 crc kubenswrapper[4693]: I1212 16:30:56.549322 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2kclt" podStartSLOduration=3.338416454 podStartE2EDuration="9.549306967s" podCreationTimestamp="2025-12-12 16:30:47 +0000 UTC" firstStartedPulling="2025-12-12 16:30:49.446082913 +0000 UTC m=+2676.614722514" lastFinishedPulling="2025-12-12 16:30:55.656973416 +0000 UTC m=+2682.825613027" observedRunningTime="2025-12-12 16:30:56.545226227 +0000 UTC m=+2683.713865828" watchObservedRunningTime="2025-12-12 16:30:56.549306967 +0000 UTC m=+2683.717946568" Dec 12 16:30:58 crc kubenswrapper[4693]: I1212 16:30:58.195238 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2kclt" 
Dec 12 16:30:58 crc kubenswrapper[4693]: I1212 16:30:58.195625 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2kclt" Dec 12 16:30:59 crc kubenswrapper[4693]: I1212 16:30:59.257245 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2kclt" podUID="15772132-7104-4b40-b3a2-c38c2058b269" containerName="registry-server" probeResult="failure" output=< Dec 12 16:30:59 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 16:30:59 crc kubenswrapper[4693]: > Dec 12 16:31:00 crc kubenswrapper[4693]: I1212 16:31:00.307110 4693 scope.go:117] "RemoveContainer" containerID="2ad4ffa31387f2ba9710686274b3a2b69a805fa8b2da63f2e2e2678012440018" Dec 12 16:31:08 crc kubenswrapper[4693]: I1212 16:31:08.247102 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2kclt" Dec 12 16:31:08 crc kubenswrapper[4693]: I1212 16:31:08.298623 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2kclt" Dec 12 16:31:08 crc kubenswrapper[4693]: I1212 16:31:08.482495 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2kclt"] Dec 12 16:31:09 crc kubenswrapper[4693]: I1212 16:31:09.683975 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2kclt" podUID="15772132-7104-4b40-b3a2-c38c2058b269" containerName="registry-server" containerID="cri-o://67839ce3b51c170fe1d5d3b79e12bb1b42c96d6ddfb88b27b495ef956b5c3c09" gracePeriod=2 Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.217599 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kclt" Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.369013 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15772132-7104-4b40-b3a2-c38c2058b269-catalog-content\") pod \"15772132-7104-4b40-b3a2-c38c2058b269\" (UID: \"15772132-7104-4b40-b3a2-c38c2058b269\") " Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.369085 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bvv5\" (UniqueName: \"kubernetes.io/projected/15772132-7104-4b40-b3a2-c38c2058b269-kube-api-access-8bvv5\") pod \"15772132-7104-4b40-b3a2-c38c2058b269\" (UID: \"15772132-7104-4b40-b3a2-c38c2058b269\") " Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.369193 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15772132-7104-4b40-b3a2-c38c2058b269-utilities\") pod \"15772132-7104-4b40-b3a2-c38c2058b269\" (UID: \"15772132-7104-4b40-b3a2-c38c2058b269\") " Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.370427 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15772132-7104-4b40-b3a2-c38c2058b269-utilities" (OuterVolumeSpecName: "utilities") pod "15772132-7104-4b40-b3a2-c38c2058b269" (UID: "15772132-7104-4b40-b3a2-c38c2058b269"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.371312 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15772132-7104-4b40-b3a2-c38c2058b269-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.375216 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15772132-7104-4b40-b3a2-c38c2058b269-kube-api-access-8bvv5" (OuterVolumeSpecName: "kube-api-access-8bvv5") pod "15772132-7104-4b40-b3a2-c38c2058b269" (UID: "15772132-7104-4b40-b3a2-c38c2058b269"). InnerVolumeSpecName "kube-api-access-8bvv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.474551 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bvv5\" (UniqueName: \"kubernetes.io/projected/15772132-7104-4b40-b3a2-c38c2058b269-kube-api-access-8bvv5\") on node \"crc\" DevicePath \"\"" Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.526557 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15772132-7104-4b40-b3a2-c38c2058b269-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15772132-7104-4b40-b3a2-c38c2058b269" (UID: "15772132-7104-4b40-b3a2-c38c2058b269"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.576531 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15772132-7104-4b40-b3a2-c38c2058b269-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.695599 4693 generic.go:334] "Generic (PLEG): container finished" podID="15772132-7104-4b40-b3a2-c38c2058b269" containerID="67839ce3b51c170fe1d5d3b79e12bb1b42c96d6ddfb88b27b495ef956b5c3c09" exitCode=0 Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.695684 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kclt" event={"ID":"15772132-7104-4b40-b3a2-c38c2058b269","Type":"ContainerDied","Data":"67839ce3b51c170fe1d5d3b79e12bb1b42c96d6ddfb88b27b495ef956b5c3c09"} Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.695720 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kclt" event={"ID":"15772132-7104-4b40-b3a2-c38c2058b269","Type":"ContainerDied","Data":"256fabe76a2558e63fe27812542750d16c09b08ce0852a136cf6077ce07bcf48"} Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.695739 4693 scope.go:117] "RemoveContainer" containerID="67839ce3b51c170fe1d5d3b79e12bb1b42c96d6ddfb88b27b495ef956b5c3c09" Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.695695 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2kclt" Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.731041 4693 scope.go:117] "RemoveContainer" containerID="de8922ddf73abcd107cc86d276c0d49fa34a4fe29d183ec52dee9ad6f0557b9c" Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.736130 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2kclt"] Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.746508 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2kclt"] Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.767870 4693 scope.go:117] "RemoveContainer" containerID="d8032966571af3aa2c9acb972fe332363126533efbdec5db7781fb96c70d6df0" Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.812236 4693 scope.go:117] "RemoveContainer" containerID="67839ce3b51c170fe1d5d3b79e12bb1b42c96d6ddfb88b27b495ef956b5c3c09" Dec 12 16:31:10 crc kubenswrapper[4693]: E1212 16:31:10.812854 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67839ce3b51c170fe1d5d3b79e12bb1b42c96d6ddfb88b27b495ef956b5c3c09\": container with ID starting with 67839ce3b51c170fe1d5d3b79e12bb1b42c96d6ddfb88b27b495ef956b5c3c09 not found: ID does not exist" containerID="67839ce3b51c170fe1d5d3b79e12bb1b42c96d6ddfb88b27b495ef956b5c3c09" Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.812893 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67839ce3b51c170fe1d5d3b79e12bb1b42c96d6ddfb88b27b495ef956b5c3c09"} err="failed to get container status \"67839ce3b51c170fe1d5d3b79e12bb1b42c96d6ddfb88b27b495ef956b5c3c09\": rpc error: code = NotFound desc = could not find container \"67839ce3b51c170fe1d5d3b79e12bb1b42c96d6ddfb88b27b495ef956b5c3c09\": container with ID starting with 67839ce3b51c170fe1d5d3b79e12bb1b42c96d6ddfb88b27b495ef956b5c3c09 not found: ID does not exist" Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.812922 4693 scope.go:117] "RemoveContainer" containerID="de8922ddf73abcd107cc86d276c0d49fa34a4fe29d183ec52dee9ad6f0557b9c" Dec 12 16:31:10 crc kubenswrapper[4693]: E1212 16:31:10.813216 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de8922ddf73abcd107cc86d276c0d49fa34a4fe29d183ec52dee9ad6f0557b9c\": container with ID starting with de8922ddf73abcd107cc86d276c0d49fa34a4fe29d183ec52dee9ad6f0557b9c not found: ID does not exist" containerID="de8922ddf73abcd107cc86d276c0d49fa34a4fe29d183ec52dee9ad6f0557b9c" Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.813246 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de8922ddf73abcd107cc86d276c0d49fa34a4fe29d183ec52dee9ad6f0557b9c"} err="failed to get container status \"de8922ddf73abcd107cc86d276c0d49fa34a4fe29d183ec52dee9ad6f0557b9c\": rpc error: code = NotFound desc = could not find container \"de8922ddf73abcd107cc86d276c0d49fa34a4fe29d183ec52dee9ad6f0557b9c\": container with ID starting with de8922ddf73abcd107cc86d276c0d49fa34a4fe29d183ec52dee9ad6f0557b9c not found: ID does not exist" Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.813265 4693 scope.go:117] "RemoveContainer" containerID="d8032966571af3aa2c9acb972fe332363126533efbdec5db7781fb96c70d6df0" Dec 12 16:31:10 crc kubenswrapper[4693]: E1212 16:31:10.813687 4693 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"d8032966571af3aa2c9acb972fe332363126533efbdec5db7781fb96c70d6df0\": container with ID starting with d8032966571af3aa2c9acb972fe332363126533efbdec5db7781fb96c70d6df0 not found: ID does not exist" containerID="d8032966571af3aa2c9acb972fe332363126533efbdec5db7781fb96c70d6df0" Dec 12 16:31:10 crc kubenswrapper[4693]: I1212 16:31:10.813774 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8032966571af3aa2c9acb972fe332363126533efbdec5db7781fb96c70d6df0"} err="failed to get container status \"d8032966571af3aa2c9acb972fe332363126533efbdec5db7781fb96c70d6df0\": rpc error: code = NotFound desc = could not find container \"d8032966571af3aa2c9acb972fe332363126533efbdec5db7781fb96c70d6df0\": container with ID starting with d8032966571af3aa2c9acb972fe332363126533efbdec5db7781fb96c70d6df0 not found: ID does not exist" Dec 12 16:31:11 crc kubenswrapper[4693]: I1212 16:31:11.371149 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15772132-7104-4b40-b3a2-c38c2058b269" path="/var/lib/kubelet/pods/15772132-7104-4b40-b3a2-c38c2058b269/volumes" Dec 12 16:31:13 crc kubenswrapper[4693]: I1212 16:31:13.751724 4693 generic.go:334] "Generic (PLEG): container finished" podID="ecd78dd6-2f3a-4cb0-8966-10c8c680c998" containerID="54f74095d8bc779acbda9d18c50f369f03e82674e5b923c91386dda562eda7f2" exitCode=0 Dec 12 16:31:13 crc kubenswrapper[4693]: I1212 16:31:13.752512 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" event={"ID":"ecd78dd6-2f3a-4cb0-8966-10c8c680c998","Type":"ContainerDied","Data":"54f74095d8bc779acbda9d18c50f369f03e82674e5b923c91386dda562eda7f2"} Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.247096 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.297488 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ovncontroller-config-0\") pod \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.297654 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ssh-key\") pod \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.297763 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ovn-combined-ca-bundle\") pod \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.297793 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmmsx\" (UniqueName: \"kubernetes.io/projected/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-kube-api-access-fmmsx\") pod \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.297880 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-inventory\") pod \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\" (UID: \"ecd78dd6-2f3a-4cb0-8966-10c8c680c998\") " Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.325431 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ecd78dd6-2f3a-4cb0-8966-10c8c680c998" (UID: "ecd78dd6-2f3a-4cb0-8966-10c8c680c998"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.325472 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-kube-api-access-fmmsx" (OuterVolumeSpecName: "kube-api-access-fmmsx") pod "ecd78dd6-2f3a-4cb0-8966-10c8c680c998" (UID: "ecd78dd6-2f3a-4cb0-8966-10c8c680c998"). InnerVolumeSpecName "kube-api-access-fmmsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.340393 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "ecd78dd6-2f3a-4cb0-8966-10c8c680c998" (UID: "ecd78dd6-2f3a-4cb0-8966-10c8c680c998"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.349716 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ecd78dd6-2f3a-4cb0-8966-10c8c680c998" (UID: "ecd78dd6-2f3a-4cb0-8966-10c8c680c998"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.382634 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-inventory" (OuterVolumeSpecName: "inventory") pod "ecd78dd6-2f3a-4cb0-8966-10c8c680c998" (UID: "ecd78dd6-2f3a-4cb0-8966-10c8c680c998"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.400442 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.400479 4693 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.400499 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmmsx\" (UniqueName: \"kubernetes.io/projected/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-kube-api-access-fmmsx\") on node \"crc\" DevicePath \"\"" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.400536 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.400549 4693 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ecd78dd6-2f3a-4cb0-8966-10c8c680c998-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.782061 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" event={"ID":"ecd78dd6-2f3a-4cb0-8966-10c8c680c998","Type":"ContainerDied","Data":"a005392582418641a8f96860674a66099c785cc94157ed22fa61a2bd825ee57d"} Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.782110 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a005392582418641a8f96860674a66099c785cc94157ed22fa61a2bd825ee57d" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.782110 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w74c5" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.942492 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw"] Dec 12 16:31:15 crc kubenswrapper[4693]: E1212 16:31:15.943638 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15772132-7104-4b40-b3a2-c38c2058b269" containerName="extract-utilities" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.943742 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="15772132-7104-4b40-b3a2-c38c2058b269" containerName="extract-utilities" Dec 12 16:31:15 crc kubenswrapper[4693]: E1212 16:31:15.943867 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd78dd6-2f3a-4cb0-8966-10c8c680c998" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.943932 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd78dd6-2f3a-4cb0-8966-10c8c680c998" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 12 16:31:15 crc kubenswrapper[4693]: E1212 16:31:15.944018 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15772132-7104-4b40-b3a2-c38c2058b269" containerName="registry-server" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.944081 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="15772132-7104-4b40-b3a2-c38c2058b269" containerName="registry-server" Dec 12 16:31:15 crc kubenswrapper[4693]: E1212 16:31:15.944235 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15772132-7104-4b40-b3a2-c38c2058b269" containerName="extract-content" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.944342 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="15772132-7104-4b40-b3a2-c38c2058b269" containerName="extract-content" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.944770 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="15772132-7104-4b40-b3a2-c38c2058b269" containerName="registry-server" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.944884 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecd78dd6-2f3a-4cb0-8966-10c8c680c998" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.946202 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.949932 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.950167 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.950252 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.950545 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.952677 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.952888 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:31:15 crc kubenswrapper[4693]: I1212 16:31:15.960096 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw"] Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.016595 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.016926 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.016998 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.017130 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.017239 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b627x\" 
(UniqueName: \"kubernetes.io/projected/b9306ecf-c57a-439e-a913-ce9fde2688ce-kube-api-access-b627x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.017303 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.119141 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.119203 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.119286 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.119374 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b627x\" (UniqueName: \"kubernetes.io/projected/b9306ecf-c57a-439e-a913-ce9fde2688ce-kube-api-access-b627x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.119421 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.119524 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" Dec 12 
16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.124966 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw"
Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.124966 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw"
Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.125035 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw"
Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.125235 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw"
Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.127579 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw"
Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.136092 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b627x\" (UniqueName: \"kubernetes.io/projected/b9306ecf-c57a-439e-a913-ce9fde2688ce-kube-api-access-b627x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw"
Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.267179 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw"
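Note: the six MountVolume.SetUp records above complete a sequence that every volume in this excerpt follows: operationExecutor.VerifyControllerAttachedVolume, then operationExecutor.MountVolume started, then MountVolume.SetUp succeeded, after which the kubelet looks for a pod sandbox. A small Go filter for tallying those stages per volume from an excerpt like this is sketched below; the program and its regular expression are illustrative helpers written against the message format shown here, not part of the kubelet.

    // volstages.go - a sketch: tally the volume-reconciler stages seen for each
    // volume in a kubelet journal excerpt read from stdin. The stage strings are
    // the ones kubelet logs above; everything else here is illustrative.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Matches e.g.: MountVolume.SetUp succeeded for volume \"inventory\"
        re := regexp.MustCompile(`(VerifyControllerAttachedVolume started|MountVolume started|MountVolume\.SetUp succeeded) for volume \\"([^\\"]+)\\"`)
        stages := map[string][]string{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
                stages[m[2]] = append(stages[m[2]], m[1])
            }
        }
        for vol, seen := range stages {
            fmt.Printf("%-45s %v\n", vol, seen)
        }
    }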
Dec 12 16:31:16 crc kubenswrapper[4693]: W1212 16:31:16.852666 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9306ecf_c57a_439e_a913_ce9fde2688ce.slice/crio-361ace8d2d83867d3a040e4e067e41d240e985ae6a02d722d4b2d962beccc17f WatchSource:0}: Error finding container 361ace8d2d83867d3a040e4e067e41d240e985ae6a02d722d4b2d962beccc17f: Status 404 returned error can't find the container with id 361ace8d2d83867d3a040e4e067e41d240e985ae6a02d722d4b2d962beccc17f
Dec 12 16:31:16 crc kubenswrapper[4693]: I1212 16:31:16.855408 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw"]
Dec 12 16:31:17 crc kubenswrapper[4693]: I1212 16:31:17.804861 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" event={"ID":"b9306ecf-c57a-439e-a913-ce9fde2688ce","Type":"ContainerStarted","Data":"361ace8d2d83867d3a040e4e067e41d240e985ae6a02d722d4b2d962beccc17f"}
Dec 12 16:31:18 crc kubenswrapper[4693]: I1212 16:31:18.823372 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" event={"ID":"b9306ecf-c57a-439e-a913-ce9fde2688ce","Type":"ContainerStarted","Data":"791c04e014a91a15049ccb2caaf32527816d3c82f8a60f971a2a92e470447e7e"}
Dec 12 16:31:18 crc kubenswrapper[4693]: I1212 16:31:18.858388 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" podStartSLOduration=3.194503312 podStartE2EDuration="3.858365003s" podCreationTimestamp="2025-12-12 16:31:15 +0000 UTC" firstStartedPulling="2025-12-12 16:31:16.855296562 +0000 UTC m=+2704.023936163" lastFinishedPulling="2025-12-12 16:31:17.519158253 +0000 UTC m=+2704.687797854" observedRunningTime="2025-12-12 16:31:18.844699064 +0000 UTC m=+2706.013338685" watchObservedRunningTime="2025-12-12 16:31:18.858365003 +0000 UTC m=+2706.027004604"
Dec 12 16:31:33 crc kubenswrapper[4693]: I1212 16:31:33.750117 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-znrzn"]
Dec 12 16:31:33 crc kubenswrapper[4693]: I1212 16:31:33.755684 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znrzn" Dec 12 16:31:33 crc kubenswrapper[4693]: I1212 16:31:33.773823 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znrzn"] Dec 12 16:31:33 crc kubenswrapper[4693]: I1212 16:31:33.907037 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c531fda-423d-4afc-b069-c59e2cad695d-catalog-content\") pod \"redhat-marketplace-znrzn\" (UID: \"9c531fda-423d-4afc-b069-c59e2cad695d\") " pod="openshift-marketplace/redhat-marketplace-znrzn" Dec 12 16:31:33 crc kubenswrapper[4693]: I1212 16:31:33.907385 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c531fda-423d-4afc-b069-c59e2cad695d-utilities\") pod \"redhat-marketplace-znrzn\" (UID: \"9c531fda-423d-4afc-b069-c59e2cad695d\") " pod="openshift-marketplace/redhat-marketplace-znrzn" Dec 12 16:31:33 crc kubenswrapper[4693]: I1212 16:31:33.907452 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl42z\" (UniqueName: \"kubernetes.io/projected/9c531fda-423d-4afc-b069-c59e2cad695d-kube-api-access-nl42z\") pod \"redhat-marketplace-znrzn\" (UID: \"9c531fda-423d-4afc-b069-c59e2cad695d\") " pod="openshift-marketplace/redhat-marketplace-znrzn" Dec 12 16:31:34 crc kubenswrapper[4693]: I1212 16:31:34.009174 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c531fda-423d-4afc-b069-c59e2cad695d-catalog-content\") pod \"redhat-marketplace-znrzn\" (UID: \"9c531fda-423d-4afc-b069-c59e2cad695d\") " pod="openshift-marketplace/redhat-marketplace-znrzn" Dec 12 16:31:34 crc kubenswrapper[4693]: I1212 16:31:34.009343 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c531fda-423d-4afc-b069-c59e2cad695d-utilities\") pod \"redhat-marketplace-znrzn\" (UID: \"9c531fda-423d-4afc-b069-c59e2cad695d\") " pod="openshift-marketplace/redhat-marketplace-znrzn" Dec 12 16:31:34 crc kubenswrapper[4693]: I1212 16:31:34.009398 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl42z\" (UniqueName: \"kubernetes.io/projected/9c531fda-423d-4afc-b069-c59e2cad695d-kube-api-access-nl42z\") pod \"redhat-marketplace-znrzn\" (UID: \"9c531fda-423d-4afc-b069-c59e2cad695d\") " pod="openshift-marketplace/redhat-marketplace-znrzn" Dec 12 16:31:34 crc kubenswrapper[4693]: I1212 16:31:34.009923 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c531fda-423d-4afc-b069-c59e2cad695d-utilities\") pod \"redhat-marketplace-znrzn\" (UID: \"9c531fda-423d-4afc-b069-c59e2cad695d\") " pod="openshift-marketplace/redhat-marketplace-znrzn" Dec 12 16:31:34 crc kubenswrapper[4693]: I1212 16:31:34.009932 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c531fda-423d-4afc-b069-c59e2cad695d-catalog-content\") pod \"redhat-marketplace-znrzn\" (UID: \"9c531fda-423d-4afc-b069-c59e2cad695d\") " pod="openshift-marketplace/redhat-marketplace-znrzn" Dec 12 16:31:34 crc kubenswrapper[4693]: I1212 16:31:34.032090 4693 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nl42z\" (UniqueName: \"kubernetes.io/projected/9c531fda-423d-4afc-b069-c59e2cad695d-kube-api-access-nl42z\") pod \"redhat-marketplace-znrzn\" (UID: \"9c531fda-423d-4afc-b069-c59e2cad695d\") " pod="openshift-marketplace/redhat-marketplace-znrzn"
Dec 12 16:31:34 crc kubenswrapper[4693]: I1212 16:31:34.089870 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znrzn"
Dec 12 16:31:34 crc kubenswrapper[4693]: I1212 16:31:34.713928 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znrzn"]
Dec 12 16:31:35 crc kubenswrapper[4693]: I1212 16:31:35.015165 4693 generic.go:334] "Generic (PLEG): container finished" podID="9c531fda-423d-4afc-b069-c59e2cad695d" containerID="ea3cff6ab0a6fb66ce729a41cdf7e75ac552ab423be4e74b561486cf92f6bf01" exitCode=0
Dec 12 16:31:35 crc kubenswrapper[4693]: I1212 16:31:35.015286 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znrzn" event={"ID":"9c531fda-423d-4afc-b069-c59e2cad695d","Type":"ContainerDied","Data":"ea3cff6ab0a6fb66ce729a41cdf7e75ac552ab423be4e74b561486cf92f6bf01"}
Dec 12 16:31:35 crc kubenswrapper[4693]: I1212 16:31:35.015572 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znrzn" event={"ID":"9c531fda-423d-4afc-b069-c59e2cad695d","Type":"ContainerStarted","Data":"ad900290e62927bcc305c08532d282c5cd27b7a96b12af3e23772f0fc2dc1cef"}
Dec 12 16:31:35 crc kubenswrapper[4693]: I1212 16:31:35.017673 4693 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 12 16:31:36 crc kubenswrapper[4693]: I1212 16:31:36.026590 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znrzn" event={"ID":"9c531fda-423d-4afc-b069-c59e2cad695d","Type":"ContainerStarted","Data":"699554bef6e866b37b9d21f1351baa4bbd5c6a0968898f12e59adc4be269b507"}
Dec 12 16:31:37 crc kubenswrapper[4693]: I1212 16:31:37.040042 4693 generic.go:334] "Generic (PLEG): container finished" podID="9c531fda-423d-4afc-b069-c59e2cad695d" containerID="699554bef6e866b37b9d21f1351baa4bbd5c6a0968898f12e59adc4be269b507" exitCode=0
Dec 12 16:31:37 crc kubenswrapper[4693]: I1212 16:31:37.040138 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znrzn" event={"ID":"9c531fda-423d-4afc-b069-c59e2cad695d","Type":"ContainerDied","Data":"699554bef6e866b37b9d21f1351baa4bbd5c6a0968898f12e59adc4be269b507"}
Dec 12 16:31:39 crc kubenswrapper[4693]: I1212 16:31:39.060833 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znrzn" event={"ID":"9c531fda-423d-4afc-b069-c59e2cad695d","Type":"ContainerStarted","Data":"488023fb0192ac475deff4bec43f2bf070f683ed1be4534af321fa81c058153d"}
Dec 12 16:31:39 crc kubenswrapper[4693]: I1212 16:31:39.080293 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-znrzn" podStartSLOduration=2.354455522 podStartE2EDuration="6.08025693s" podCreationTimestamp="2025-12-12 16:31:33 +0000 UTC" firstStartedPulling="2025-12-12 16:31:35.017342569 +0000 UTC m=+2722.185982180" lastFinishedPulling="2025-12-12 16:31:38.743143997 +0000 UTC m=+2725.911783588" observedRunningTime="2025-12-12 16:31:39.078864533 +0000 UTC m=+2726.247504164" watchObservedRunningTime="2025-12-12 16:31:39.08025693 +0000 UTC m=+2726.248896531"
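Note: the two pod_startup_latency_tracker records in this excerpt are internally consistent. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window taken from the monotonic m=+ offsets, which matches the upstream pod-startup SLI definition (it excludes image pulling). The arithmetic for redhat-marketplace-znrzn, checked in a few lines of Go (constants copied from the record above):

    // A sketch reproducing the arithmetic of the "Observed pod startup duration"
    // record above; not kubelet code, just the numbers it logged.
    package main

    import "fmt"

    func main() {
        const (
            e2e              = 6.08025693     // podStartE2EDuration: 16:31:39.08025693 - 16:31:33
            firstStartedPull = 2722.185982180 // m=+ offset of firstStartedPulling
            lastFinishedPull = 2725.911783588 // m=+ offset of lastFinishedPulling
        )
        pull := lastFinishedPull - firstStartedPull
        fmt.Printf("image pull window: %.9fs\n", pull)     // 3.725801408s
        fmt.Printf("SLO duration:      %.9fs\n", e2e-pull) // ~2.354455522s, the logged podStartSLOduration
    }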
Dec 12 16:31:44 crc kubenswrapper[4693]: I1212 16:31:44.090780 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-znrzn"
Dec 12 16:31:44 crc kubenswrapper[4693]: I1212 16:31:44.091140 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-znrzn"
Dec 12 16:31:44 crc kubenswrapper[4693]: I1212 16:31:44.146762 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-znrzn"
Dec 12 16:31:44 crc kubenswrapper[4693]: I1212 16:31:44.217157 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-znrzn"
Dec 12 16:31:44 crc kubenswrapper[4693]: I1212 16:31:44.388076 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-znrzn"]
Dec 12 16:31:46 crc kubenswrapper[4693]: I1212 16:31:46.130169 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-znrzn" podUID="9c531fda-423d-4afc-b069-c59e2cad695d" containerName="registry-server" containerID="cri-o://488023fb0192ac475deff4bec43f2bf070f683ed1be4534af321fa81c058153d" gracePeriod=2
Dec 12 16:31:46 crc kubenswrapper[4693]: I1212 16:31:46.755647 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znrzn"
Dec 12 16:31:46 crc kubenswrapper[4693]: I1212 16:31:46.841477 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl42z\" (UniqueName: \"kubernetes.io/projected/9c531fda-423d-4afc-b069-c59e2cad695d-kube-api-access-nl42z\") pod \"9c531fda-423d-4afc-b069-c59e2cad695d\" (UID: \"9c531fda-423d-4afc-b069-c59e2cad695d\") "
Dec 12 16:31:46 crc kubenswrapper[4693]: I1212 16:31:46.841561 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c531fda-423d-4afc-b069-c59e2cad695d-catalog-content\") pod \"9c531fda-423d-4afc-b069-c59e2cad695d\" (UID: \"9c531fda-423d-4afc-b069-c59e2cad695d\") "
Dec 12 16:31:46 crc kubenswrapper[4693]: I1212 16:31:46.841968 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c531fda-423d-4afc-b069-c59e2cad695d-utilities\") pod \"9c531fda-423d-4afc-b069-c59e2cad695d\" (UID: \"9c531fda-423d-4afc-b069-c59e2cad695d\") "
Dec 12 16:31:46 crc kubenswrapper[4693]: I1212 16:31:46.842612 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c531fda-423d-4afc-b069-c59e2cad695d-utilities" (OuterVolumeSpecName: "utilities") pod "9c531fda-423d-4afc-b069-c59e2cad695d" (UID: "9c531fda-423d-4afc-b069-c59e2cad695d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:31:46 crc kubenswrapper[4693]: I1212 16:31:46.843108 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c531fda-423d-4afc-b069-c59e2cad695d-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:31:46 crc kubenswrapper[4693]: I1212 16:31:46.847571 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c531fda-423d-4afc-b069-c59e2cad695d-kube-api-access-nl42z" (OuterVolumeSpecName: "kube-api-access-nl42z") pod "9c531fda-423d-4afc-b069-c59e2cad695d" (UID: "9c531fda-423d-4afc-b069-c59e2cad695d"). InnerVolumeSpecName "kube-api-access-nl42z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:31:46 crc kubenswrapper[4693]: I1212 16:31:46.861851 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c531fda-423d-4afc-b069-c59e2cad695d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c531fda-423d-4afc-b069-c59e2cad695d" (UID: "9c531fda-423d-4afc-b069-c59e2cad695d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:31:46 crc kubenswrapper[4693]: I1212 16:31:46.946864 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl42z\" (UniqueName: \"kubernetes.io/projected/9c531fda-423d-4afc-b069-c59e2cad695d-kube-api-access-nl42z\") on node \"crc\" DevicePath \"\"" Dec 12 16:31:46 crc kubenswrapper[4693]: I1212 16:31:46.947481 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c531fda-423d-4afc-b069-c59e2cad695d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:31:47 crc kubenswrapper[4693]: I1212 16:31:47.143078 4693 generic.go:334] "Generic (PLEG): container finished" podID="9c531fda-423d-4afc-b069-c59e2cad695d" containerID="488023fb0192ac475deff4bec43f2bf070f683ed1be4534af321fa81c058153d" exitCode=0 Dec 12 16:31:47 crc kubenswrapper[4693]: I1212 16:31:47.143143 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znrzn" event={"ID":"9c531fda-423d-4afc-b069-c59e2cad695d","Type":"ContainerDied","Data":"488023fb0192ac475deff4bec43f2bf070f683ed1be4534af321fa81c058153d"} Dec 12 16:31:47 crc kubenswrapper[4693]: I1212 16:31:47.143195 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znrzn" event={"ID":"9c531fda-423d-4afc-b069-c59e2cad695d","Type":"ContainerDied","Data":"ad900290e62927bcc305c08532d282c5cd27b7a96b12af3e23772f0fc2dc1cef"} Dec 12 16:31:47 crc kubenswrapper[4693]: I1212 16:31:47.143222 4693 scope.go:117] "RemoveContainer" containerID="488023fb0192ac475deff4bec43f2bf070f683ed1be4534af321fa81c058153d" Dec 12 16:31:47 crc kubenswrapper[4693]: I1212 16:31:47.143155 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znrzn" Dec 12 16:31:47 crc kubenswrapper[4693]: I1212 16:31:47.181023 4693 scope.go:117] "RemoveContainer" containerID="699554bef6e866b37b9d21f1351baa4bbd5c6a0968898f12e59adc4be269b507" Dec 12 16:31:47 crc kubenswrapper[4693]: I1212 16:31:47.199248 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-znrzn"] Dec 12 16:31:47 crc kubenswrapper[4693]: I1212 16:31:47.214692 4693 scope.go:117] "RemoveContainer" containerID="ea3cff6ab0a6fb66ce729a41cdf7e75ac552ab423be4e74b561486cf92f6bf01" Dec 12 16:31:47 crc kubenswrapper[4693]: I1212 16:31:47.216374 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-znrzn"] Dec 12 16:31:47 crc kubenswrapper[4693]: I1212 16:31:47.273319 4693 scope.go:117] "RemoveContainer" containerID="488023fb0192ac475deff4bec43f2bf070f683ed1be4534af321fa81c058153d" Dec 12 16:31:47 crc kubenswrapper[4693]: E1212 16:31:47.273930 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"488023fb0192ac475deff4bec43f2bf070f683ed1be4534af321fa81c058153d\": container with ID starting with 488023fb0192ac475deff4bec43f2bf070f683ed1be4534af321fa81c058153d not found: ID does not exist" containerID="488023fb0192ac475deff4bec43f2bf070f683ed1be4534af321fa81c058153d" Dec 12 16:31:47 crc kubenswrapper[4693]: I1212 16:31:47.273992 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"488023fb0192ac475deff4bec43f2bf070f683ed1be4534af321fa81c058153d"} err="failed to get container status \"488023fb0192ac475deff4bec43f2bf070f683ed1be4534af321fa81c058153d\": rpc error: code = NotFound desc = could not find container \"488023fb0192ac475deff4bec43f2bf070f683ed1be4534af321fa81c058153d\": container with ID starting with 488023fb0192ac475deff4bec43f2bf070f683ed1be4534af321fa81c058153d not found: ID does not exist" Dec 12 16:31:47 crc kubenswrapper[4693]: I1212 16:31:47.274029 4693 scope.go:117] "RemoveContainer" containerID="699554bef6e866b37b9d21f1351baa4bbd5c6a0968898f12e59adc4be269b507" Dec 12 16:31:47 crc kubenswrapper[4693]: E1212 16:31:47.274581 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"699554bef6e866b37b9d21f1351baa4bbd5c6a0968898f12e59adc4be269b507\": container with ID starting with 699554bef6e866b37b9d21f1351baa4bbd5c6a0968898f12e59adc4be269b507 not found: ID does not exist" containerID="699554bef6e866b37b9d21f1351baa4bbd5c6a0968898f12e59adc4be269b507" Dec 12 16:31:47 crc kubenswrapper[4693]: I1212 16:31:47.274619 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"699554bef6e866b37b9d21f1351baa4bbd5c6a0968898f12e59adc4be269b507"} err="failed to get container status \"699554bef6e866b37b9d21f1351baa4bbd5c6a0968898f12e59adc4be269b507\": rpc error: code = NotFound desc = could not find container \"699554bef6e866b37b9d21f1351baa4bbd5c6a0968898f12e59adc4be269b507\": container with ID starting with 699554bef6e866b37b9d21f1351baa4bbd5c6a0968898f12e59adc4be269b507 not found: ID does not exist" Dec 12 16:31:47 crc kubenswrapper[4693]: I1212 16:31:47.274634 4693 scope.go:117] "RemoveContainer" containerID="ea3cff6ab0a6fb66ce729a41cdf7e75ac552ab423be4e74b561486cf92f6bf01" Dec 12 16:31:47 crc kubenswrapper[4693]: E1212 16:31:47.274890 4693 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ea3cff6ab0a6fb66ce729a41cdf7e75ac552ab423be4e74b561486cf92f6bf01\": container with ID starting with ea3cff6ab0a6fb66ce729a41cdf7e75ac552ab423be4e74b561486cf92f6bf01 not found: ID does not exist" containerID="ea3cff6ab0a6fb66ce729a41cdf7e75ac552ab423be4e74b561486cf92f6bf01"
Dec 12 16:31:47 crc kubenswrapper[4693]: I1212 16:31:47.274944 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea3cff6ab0a6fb66ce729a41cdf7e75ac552ab423be4e74b561486cf92f6bf01"} err="failed to get container status \"ea3cff6ab0a6fb66ce729a41cdf7e75ac552ab423be4e74b561486cf92f6bf01\": rpc error: code = NotFound desc = could not find container \"ea3cff6ab0a6fb66ce729a41cdf7e75ac552ab423be4e74b561486cf92f6bf01\": container with ID starting with ea3cff6ab0a6fb66ce729a41cdf7e75ac552ab423be4e74b561486cf92f6bf01 not found: ID does not exist"
Dec 12 16:31:47 crc kubenswrapper[4693]: I1212 16:31:47.373708 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c531fda-423d-4afc-b069-c59e2cad695d" path="/var/lib/kubelet/pods/9c531fda-423d-4afc-b069-c59e2cad695d/volumes"
Dec 12 16:32:10 crc kubenswrapper[4693]: I1212 16:32:10.418749 4693 generic.go:334] "Generic (PLEG): container finished" podID="b9306ecf-c57a-439e-a913-ce9fde2688ce" containerID="791c04e014a91a15049ccb2caaf32527816d3c82f8a60f971a2a92e470447e7e" exitCode=0
Dec 12 16:32:10 crc kubenswrapper[4693]: I1212 16:32:10.418835 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" event={"ID":"b9306ecf-c57a-439e-a913-ce9fde2688ce","Type":"ContainerDied","Data":"791c04e014a91a15049ccb2caaf32527816d3c82f8a60f971a2a92e470447e7e"}
Dec 12 16:32:11 crc kubenswrapper[4693]: I1212 16:32:11.904809 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw"
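Note: the E-level log.go:32 and pod_container_deletor.go:53 entries above are a benign race, not a failure: the kubelet removed each container, then asked the runtime for its status, and CRI-O had already deleted it, so the RPC comes back with gRPC code NotFound. The usual way such errors are classified in Go is via google.golang.org/grpc/status; a self-contained sketch of the pattern follows (the fake RPC below is hypothetical, standing in for CRI's ContainerStatus call):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // fakeContainerStatus stands in for a CRI ContainerStatus RPC that fails the
    // way CRI-O does above when the container is already gone (hypothetical).
    func fakeContainerStatus(id string) error {
        return status.Errorf(codes.NotFound, "could not find container %q", id)
    }

    func main() {
        err := fakeContainerStatus("488023fb0192ac475deff4bec43f2bf070f683ed1be4534af321fa81c058153d")
        if st, ok := status.FromError(err); ok && st.Code() == codes.NotFound {
            // Already deleted by the runtime: safe to treat the removal as done.
            fmt.Println("container already gone:", st.Message())
            return
        }
        fmt.Println("unexpected error:", err)
    }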
Dec 12 16:32:11 crc kubenswrapper[4693]: I1212 16:32:11.944297 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-inventory\") pod \"b9306ecf-c57a-439e-a913-ce9fde2688ce\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") "
Dec 12 16:32:11 crc kubenswrapper[4693]: I1212 16:32:11.944478 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-nova-metadata-neutron-config-0\") pod \"b9306ecf-c57a-439e-a913-ce9fde2688ce\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") "
Dec 12 16:32:11 crc kubenswrapper[4693]: I1212 16:32:11.944546 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-ssh-key\") pod \"b9306ecf-c57a-439e-a913-ce9fde2688ce\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") "
Dec 12 16:32:11 crc kubenswrapper[4693]: I1212 16:32:11.944657 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-neutron-metadata-combined-ca-bundle\") pod \"b9306ecf-c57a-439e-a913-ce9fde2688ce\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") "
Dec 12 16:32:11 crc kubenswrapper[4693]: I1212 16:32:11.944787 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b627x\" (UniqueName: \"kubernetes.io/projected/b9306ecf-c57a-439e-a913-ce9fde2688ce-kube-api-access-b627x\") pod \"b9306ecf-c57a-439e-a913-ce9fde2688ce\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") "
Dec 12 16:32:11 crc kubenswrapper[4693]: I1212 16:32:11.944904 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-neutron-ovn-metadata-agent-neutron-config-0\") pod \"b9306ecf-c57a-439e-a913-ce9fde2688ce\" (UID: \"b9306ecf-c57a-439e-a913-ce9fde2688ce\") "
Dec 12 16:32:11 crc kubenswrapper[4693]: I1212 16:32:11.951723 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "b9306ecf-c57a-439e-a913-ce9fde2688ce" (UID: "b9306ecf-c57a-439e-a913-ce9fde2688ce"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 16:32:11 crc kubenswrapper[4693]: I1212 16:32:11.972375 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9306ecf-c57a-439e-a913-ce9fde2688ce-kube-api-access-b627x" (OuterVolumeSpecName: "kube-api-access-b627x") pod "b9306ecf-c57a-439e-a913-ce9fde2688ce" (UID: "b9306ecf-c57a-439e-a913-ce9fde2688ce"). InnerVolumeSpecName "kube-api-access-b627x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:32:11 crc kubenswrapper[4693]: I1212 16:32:11.985806 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "b9306ecf-c57a-439e-a913-ce9fde2688ce" (UID: "b9306ecf-c57a-439e-a913-ce9fde2688ce"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:32:11 crc kubenswrapper[4693]: I1212 16:32:11.992965 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b9306ecf-c57a-439e-a913-ce9fde2688ce" (UID: "b9306ecf-c57a-439e-a913-ce9fde2688ce"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:32:11 crc kubenswrapper[4693]: I1212 16:32:11.996881 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-inventory" (OuterVolumeSpecName: "inventory") pod "b9306ecf-c57a-439e-a913-ce9fde2688ce" (UID: "b9306ecf-c57a-439e-a913-ce9fde2688ce"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.012644 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "b9306ecf-c57a-439e-a913-ce9fde2688ce" (UID: "b9306ecf-c57a-439e-a913-ce9fde2688ce"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.048075 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b627x\" (UniqueName: \"kubernetes.io/projected/b9306ecf-c57a-439e-a913-ce9fde2688ce-kube-api-access-b627x\") on node \"crc\" DevicePath \"\"" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.048116 4693 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.048132 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.048144 4693 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.048156 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.048167 4693 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9306ecf-c57a-439e-a913-ce9fde2688ce-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.438937 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw" event={"ID":"b9306ecf-c57a-439e-a913-ce9fde2688ce","Type":"ContainerDied","Data":"361ace8d2d83867d3a040e4e067e41d240e985ae6a02d722d4b2d962beccc17f"} Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.438975 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="361ace8d2d83867d3a040e4e067e41d240e985ae6a02d722d4b2d962beccc17f" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.439026 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mzhlw"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.530165 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.530534 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.537118 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm"]
Dec 12 16:32:12 crc kubenswrapper[4693]: E1212 16:32:12.538246 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c531fda-423d-4afc-b069-c59e2cad695d" containerName="extract-content"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.538290 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c531fda-423d-4afc-b069-c59e2cad695d" containerName="extract-content"
Dec 12 16:32:12 crc kubenswrapper[4693]: E1212 16:32:12.538450 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c531fda-423d-4afc-b069-c59e2cad695d" containerName="registry-server"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.538462 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c531fda-423d-4afc-b069-c59e2cad695d" containerName="registry-server"
Dec 12 16:32:12 crc kubenswrapper[4693]: E1212 16:32:12.538479 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9306ecf-c57a-439e-a913-ce9fde2688ce" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.538488 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9306ecf-c57a-439e-a913-ce9fde2688ce" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 12 16:32:12 crc kubenswrapper[4693]: E1212 16:32:12.538509 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c531fda-423d-4afc-b069-c59e2cad695d" containerName="extract-utilities"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.538517 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c531fda-423d-4afc-b069-c59e2cad695d" containerName="extract-utilities"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.538831 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c531fda-423d-4afc-b069-c59e2cad695d" containerName="registry-server"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.538872 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9306ecf-c57a-439e-a913-ce9fde2688ce" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.540081 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm"
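Note: the cpu_manager.go:410 and memory_manager.go:354 burst above is admission-time housekeeping: before starting libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm, the kubelet sweeps resource-manager state for containers whose pods are gone (the deleted marketplace pod and the finished neutron-metadata job); the E-level lines flag stale entries as they are dropped. Reduced to a generic sketch, with types and names that are illustrative rather than the kubelet's own:

    package main

    import "fmt"

    // containerKey identifies an entry in a per-container assignment map,
    // mirroring how the kubelet keys CPU/memory assignments (illustrative).
    type containerKey struct{ podUID, container string }

    // removeStaleState drops assignments whose pod is no longer active.
    func removeStaleState(assignments map[containerKey]string, active map[string]bool) {
        for k := range assignments {
            if !active[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
                delete(assignments, k) // deleting while ranging over a map is safe in Go
            }
        }
    }

    func main() {
        assignments := map[containerKey]string{
            {"9c531fda-423d-4afc-b069-c59e2cad695d", "registry-server"}: "cpus 0-1",
            {"b9306ecf-c57a-439e-a913-ce9fde2688ce", "neutron-metadata-edpm-deployment-openstack-edpm-ipam"}: "cpus 2-3",
        }
        removeStaleState(assignments, map[string]bool{ /* neither pod is active any more */ })
    }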
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.543852 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.544056 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.544232 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.544477 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.544619 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.550865 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm"]
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.662685 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.662797 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.662892 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.662934 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b69c8\" (UniqueName: \"kubernetes.io/projected/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-kube-api-access-b69c8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.663056 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm"
Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.765960 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.766045 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.766101 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b69c8\" (UniqueName: \"kubernetes.io/projected/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-kube-api-access-b69c8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.766131 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.766228 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.772097 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.772698 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.773014 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.780826 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-ssh-key\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.782841 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b69c8\" (UniqueName: \"kubernetes.io/projected/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-kube-api-access-b69c8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" Dec 12 16:32:12 crc kubenswrapper[4693]: I1212 16:32:12.895893 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" Dec 12 16:32:13 crc kubenswrapper[4693]: I1212 16:32:13.425825 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm"] Dec 12 16:32:13 crc kubenswrapper[4693]: I1212 16:32:13.449769 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" event={"ID":"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a","Type":"ContainerStarted","Data":"a0464e9df4710afa2e3018cbb3b40971b50c371cd4af35bed03d456b1007fd31"} Dec 12 16:32:14 crc kubenswrapper[4693]: I1212 16:32:14.362093 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:32:15 crc kubenswrapper[4693]: I1212 16:32:15.474924 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" event={"ID":"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a","Type":"ContainerStarted","Data":"ac6a4c77bc38600794bf46bc693f3353434169edfea1eae57996ab49a9b1976e"} Dec 12 16:32:15 crc kubenswrapper[4693]: I1212 16:32:15.499596 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" podStartSLOduration=2.573925878 podStartE2EDuration="3.499578911s" podCreationTimestamp="2025-12-12 16:32:12 +0000 UTC" firstStartedPulling="2025-12-12 16:32:13.434185874 +0000 UTC m=+2760.602825475" lastFinishedPulling="2025-12-12 16:32:14.359838907 +0000 UTC m=+2761.528478508" observedRunningTime="2025-12-12 16:32:15.496112888 +0000 UTC m=+2762.664752489" watchObservedRunningTime="2025-12-12 16:32:15.499578911 +0000 UTC m=+2762.668218512" Dec 12 16:32:18 crc kubenswrapper[4693]: I1212 16:32:18.893229 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ns9pq"] Dec 12 16:32:18 crc kubenswrapper[4693]: I1212 16:32:18.896382 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:18 crc kubenswrapper[4693]: I1212 16:32:18.946235 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ns9pq"] Dec 12 16:32:18 crc kubenswrapper[4693]: I1212 16:32:18.948314 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-catalog-content\") pod \"community-operators-ns9pq\" (UID: \"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0\") " pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:18 crc kubenswrapper[4693]: I1212 16:32:18.948460 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmkfq\" (UniqueName: \"kubernetes.io/projected/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-kube-api-access-tmkfq\") pod \"community-operators-ns9pq\" (UID: \"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0\") " pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:18 crc kubenswrapper[4693]: I1212 16:32:18.948498 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-utilities\") pod \"community-operators-ns9pq\" (UID: \"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0\") " pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:19 crc kubenswrapper[4693]: I1212 16:32:19.051301 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-catalog-content\") pod \"community-operators-ns9pq\" (UID: \"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0\") " pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:19 crc kubenswrapper[4693]: I1212 16:32:19.051461 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmkfq\" (UniqueName: \"kubernetes.io/projected/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-kube-api-access-tmkfq\") pod \"community-operators-ns9pq\" (UID: \"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0\") " pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:19 crc kubenswrapper[4693]: I1212 16:32:19.051501 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-utilities\") pod \"community-operators-ns9pq\" (UID: \"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0\") " pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:19 crc kubenswrapper[4693]: I1212 16:32:19.051898 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-catalog-content\") pod \"community-operators-ns9pq\" (UID: \"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0\") " pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:19 crc kubenswrapper[4693]: I1212 16:32:19.052239 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-utilities\") pod \"community-operators-ns9pq\" (UID: \"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0\") " pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:19 crc kubenswrapper[4693]: I1212 16:32:19.078572 4693 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tmkfq\" (UniqueName: \"kubernetes.io/projected/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-kube-api-access-tmkfq\") pod \"community-operators-ns9pq\" (UID: \"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0\") " pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:19 crc kubenswrapper[4693]: I1212 16:32:19.232189 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:19 crc kubenswrapper[4693]: I1212 16:32:19.871758 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ns9pq"] Dec 12 16:32:20 crc kubenswrapper[4693]: I1212 16:32:20.527090 4693 generic.go:334] "Generic (PLEG): container finished" podID="7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0" containerID="48c538ea3f63b488b304e89917bb38bb4758fc1b77ab8c17aeccab327b2fed02" exitCode=0 Dec 12 16:32:20 crc kubenswrapper[4693]: I1212 16:32:20.527632 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ns9pq" event={"ID":"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0","Type":"ContainerDied","Data":"48c538ea3f63b488b304e89917bb38bb4758fc1b77ab8c17aeccab327b2fed02"} Dec 12 16:32:20 crc kubenswrapper[4693]: I1212 16:32:20.527671 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ns9pq" event={"ID":"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0","Type":"ContainerStarted","Data":"f4f6ce119a74a98b28cace2d26211535d7db3aa3af3d319b0df2caade91e1c95"} Dec 12 16:32:21 crc kubenswrapper[4693]: I1212 16:32:21.539821 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ns9pq" event={"ID":"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0","Type":"ContainerStarted","Data":"82e089462cf1b2ffe3ce1155f58b12372a9bdee1c4082d43cfd988e6da7dda95"} Dec 12 16:32:22 crc kubenswrapper[4693]: E1212 16:32:22.243922 4693 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a9e9932_3ada_4fe3_8fae_cf7f9d563cb0.slice/crio-82e089462cf1b2ffe3ce1155f58b12372a9bdee1c4082d43cfd988e6da7dda95.scope\": RecentStats: unable to find data in memory cache]" Dec 12 16:32:24 crc kubenswrapper[4693]: I1212 16:32:24.572418 4693 generic.go:334] "Generic (PLEG): container finished" podID="7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0" containerID="82e089462cf1b2ffe3ce1155f58b12372a9bdee1c4082d43cfd988e6da7dda95" exitCode=0 Dec 12 16:32:24 crc kubenswrapper[4693]: I1212 16:32:24.572489 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ns9pq" event={"ID":"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0","Type":"ContainerDied","Data":"82e089462cf1b2ffe3ce1155f58b12372a9bdee1c4082d43cfd988e6da7dda95"} Dec 12 16:32:26 crc kubenswrapper[4693]: I1212 16:32:26.594668 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ns9pq" event={"ID":"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0","Type":"ContainerStarted","Data":"aa2ddeb5cdb1514c95621e5d1ee4dc69f610ec30684a54b43797f4a3ea44150a"} Dec 12 16:32:26 crc kubenswrapper[4693]: I1212 16:32:26.617090 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ns9pq" podStartSLOduration=3.9352074249999998 podStartE2EDuration="8.617072758s" podCreationTimestamp="2025-12-12 16:32:18 +0000 UTC" 
firstStartedPulling="2025-12-12 16:32:20.530475843 +0000 UTC m=+2767.699115454" lastFinishedPulling="2025-12-12 16:32:25.212341186 +0000 UTC m=+2772.380980787" observedRunningTime="2025-12-12 16:32:26.610952354 +0000 UTC m=+2773.779591965" watchObservedRunningTime="2025-12-12 16:32:26.617072758 +0000 UTC m=+2773.785712359" Dec 12 16:32:29 crc kubenswrapper[4693]: I1212 16:32:29.232460 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:29 crc kubenswrapper[4693]: I1212 16:32:29.232823 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:29 crc kubenswrapper[4693]: I1212 16:32:29.324679 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:39 crc kubenswrapper[4693]: I1212 16:32:39.302820 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:39 crc kubenswrapper[4693]: I1212 16:32:39.373590 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ns9pq"] Dec 12 16:32:39 crc kubenswrapper[4693]: I1212 16:32:39.731862 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ns9pq" podUID="7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0" containerName="registry-server" containerID="cri-o://aa2ddeb5cdb1514c95621e5d1ee4dc69f610ec30684a54b43797f4a3ea44150a" gracePeriod=2 Dec 12 16:32:40 crc kubenswrapper[4693]: I1212 16:32:40.748427 4693 generic.go:334] "Generic (PLEG): container finished" podID="7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0" containerID="aa2ddeb5cdb1514c95621e5d1ee4dc69f610ec30684a54b43797f4a3ea44150a" exitCode=0 Dec 12 16:32:40 crc kubenswrapper[4693]: I1212 16:32:40.748496 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ns9pq" event={"ID":"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0","Type":"ContainerDied","Data":"aa2ddeb5cdb1514c95621e5d1ee4dc69f610ec30684a54b43797f4a3ea44150a"} Dec 12 16:32:40 crc kubenswrapper[4693]: I1212 16:32:40.748802 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ns9pq" event={"ID":"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0","Type":"ContainerDied","Data":"f4f6ce119a74a98b28cace2d26211535d7db3aa3af3d319b0df2caade91e1c95"} Dec 12 16:32:40 crc kubenswrapper[4693]: I1212 16:32:40.748824 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4f6ce119a74a98b28cace2d26211535d7db3aa3af3d319b0df2caade91e1c95" Dec 12 16:32:40 crc kubenswrapper[4693]: I1212 16:32:40.753061 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:40 crc kubenswrapper[4693]: I1212 16:32:40.916432 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-utilities\") pod \"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0\" (UID: \"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0\") " Dec 12 16:32:40 crc kubenswrapper[4693]: I1212 16:32:40.916776 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-catalog-content\") pod \"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0\" (UID: \"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0\") " Dec 12 16:32:40 crc kubenswrapper[4693]: I1212 16:32:40.917016 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmkfq\" (UniqueName: \"kubernetes.io/projected/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-kube-api-access-tmkfq\") pod \"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0\" (UID: \"7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0\") " Dec 12 16:32:40 crc kubenswrapper[4693]: I1212 16:32:40.917854 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-utilities" (OuterVolumeSpecName: "utilities") pod "7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0" (UID: "7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:32:40 crc kubenswrapper[4693]: I1212 16:32:40.922583 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-kube-api-access-tmkfq" (OuterVolumeSpecName: "kube-api-access-tmkfq") pod "7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0" (UID: "7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0"). InnerVolumeSpecName "kube-api-access-tmkfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:32:40 crc kubenswrapper[4693]: I1212 16:32:40.971766 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0" (UID: "7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:32:41 crc kubenswrapper[4693]: I1212 16:32:41.020369 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmkfq\" (UniqueName: \"kubernetes.io/projected/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-kube-api-access-tmkfq\") on node \"crc\" DevicePath \"\"" Dec 12 16:32:41 crc kubenswrapper[4693]: I1212 16:32:41.020642 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:32:41 crc kubenswrapper[4693]: I1212 16:32:41.020746 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:32:41 crc kubenswrapper[4693]: I1212 16:32:41.758958 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ns9pq" Dec 12 16:32:41 crc kubenswrapper[4693]: I1212 16:32:41.801815 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ns9pq"] Dec 12 16:32:41 crc kubenswrapper[4693]: I1212 16:32:41.809343 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ns9pq"] Dec 12 16:32:42 crc kubenswrapper[4693]: I1212 16:32:42.529994 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:32:42 crc kubenswrapper[4693]: I1212 16:32:42.530058 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:32:43 crc kubenswrapper[4693]: I1212 16:32:43.371125 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0" path="/var/lib/kubelet/pods/7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0/volumes" Dec 12 16:33:12 crc kubenswrapper[4693]: I1212 16:33:12.530473 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:33:12 crc kubenswrapper[4693]: I1212 16:33:12.530932 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:33:12 crc kubenswrapper[4693]: I1212 16:33:12.530990 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 16:33:12 crc kubenswrapper[4693]: I1212 16:33:12.531994 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1805e1e4e83dce627739eb17f054940726e4ffb2a25197f8e6f181e76311c752"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:33:12 crc kubenswrapper[4693]: I1212 16:33:12.532075 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://1805e1e4e83dce627739eb17f054940726e4ffb2a25197f8e6f181e76311c752" gracePeriod=600 Dec 12 16:33:13 crc kubenswrapper[4693]: I1212 16:33:13.116206 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="1805e1e4e83dce627739eb17f054940726e4ffb2a25197f8e6f181e76311c752" exitCode=0 Dec 12 16:33:13 crc kubenswrapper[4693]: I1212 16:33:13.116265 4693 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"1805e1e4e83dce627739eb17f054940726e4ffb2a25197f8e6f181e76311c752"} Dec 12 16:33:13 crc kubenswrapper[4693]: I1212 16:33:13.116821 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237"} Dec 12 16:33:13 crc kubenswrapper[4693]: I1212 16:33:13.116851 4693 scope.go:117] "RemoveContainer" containerID="b9aea2bacc75b11f2763f1de611641d9ab81d0a11318e0d4935a4f5f3fae7d64" Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.655305 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t4wwf"] Dec 12 16:34:40 crc kubenswrapper[4693]: E1212 16:34:40.657578 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0" containerName="registry-server" Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.657679 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0" containerName="registry-server" Dec 12 16:34:40 crc kubenswrapper[4693]: E1212 16:34:40.657785 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0" containerName="extract-content" Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.657867 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0" containerName="extract-content" Dec 12 16:34:40 crc kubenswrapper[4693]: E1212 16:34:40.657964 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0" containerName="extract-utilities" Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.658038 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0" containerName="extract-utilities" Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.658473 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9e9932-3ada-4fe3-8fae-cf7f9d563cb0" containerName="registry-server" Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.660585 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.688093 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t4wwf"] Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.717029 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvj52\" (UniqueName: \"kubernetes.io/projected/31085dc6-0e90-4e93-9247-9e1224a9225f-kube-api-access-wvj52\") pod \"certified-operators-t4wwf\" (UID: \"31085dc6-0e90-4e93-9247-9e1224a9225f\") " pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.717181 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31085dc6-0e90-4e93-9247-9e1224a9225f-catalog-content\") pod \"certified-operators-t4wwf\" (UID: \"31085dc6-0e90-4e93-9247-9e1224a9225f\") " pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.717300 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31085dc6-0e90-4e93-9247-9e1224a9225f-utilities\") pod \"certified-operators-t4wwf\" (UID: \"31085dc6-0e90-4e93-9247-9e1224a9225f\") " pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.819646 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31085dc6-0e90-4e93-9247-9e1224a9225f-utilities\") pod \"certified-operators-t4wwf\" (UID: \"31085dc6-0e90-4e93-9247-9e1224a9225f\") " pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.819921 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvj52\" (UniqueName: \"kubernetes.io/projected/31085dc6-0e90-4e93-9247-9e1224a9225f-kube-api-access-wvj52\") pod \"certified-operators-t4wwf\" (UID: \"31085dc6-0e90-4e93-9247-9e1224a9225f\") " pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.820030 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31085dc6-0e90-4e93-9247-9e1224a9225f-catalog-content\") pod \"certified-operators-t4wwf\" (UID: \"31085dc6-0e90-4e93-9247-9e1224a9225f\") " pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.820147 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31085dc6-0e90-4e93-9247-9e1224a9225f-utilities\") pod \"certified-operators-t4wwf\" (UID: \"31085dc6-0e90-4e93-9247-9e1224a9225f\") " pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.820394 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31085dc6-0e90-4e93-9247-9e1224a9225f-catalog-content\") pod \"certified-operators-t4wwf\" (UID: \"31085dc6-0e90-4e93-9247-9e1224a9225f\") " pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.850229 4693 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wvj52\" (UniqueName: \"kubernetes.io/projected/31085dc6-0e90-4e93-9247-9e1224a9225f-kube-api-access-wvj52\") pod \"certified-operators-t4wwf\" (UID: \"31085dc6-0e90-4e93-9247-9e1224a9225f\") " pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:40 crc kubenswrapper[4693]: I1212 16:34:40.994690 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:41 crc kubenswrapper[4693]: I1212 16:34:41.528473 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t4wwf"] Dec 12 16:34:42 crc kubenswrapper[4693]: I1212 16:34:42.219311 4693 generic.go:334] "Generic (PLEG): container finished" podID="31085dc6-0e90-4e93-9247-9e1224a9225f" containerID="4656ce03dd403222daabab3f5405707016f5aad21f20dde8210a1913c4694961" exitCode=0 Dec 12 16:34:42 crc kubenswrapper[4693]: I1212 16:34:42.219369 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4wwf" event={"ID":"31085dc6-0e90-4e93-9247-9e1224a9225f","Type":"ContainerDied","Data":"4656ce03dd403222daabab3f5405707016f5aad21f20dde8210a1913c4694961"} Dec 12 16:34:42 crc kubenswrapper[4693]: I1212 16:34:42.219652 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4wwf" event={"ID":"31085dc6-0e90-4e93-9247-9e1224a9225f","Type":"ContainerStarted","Data":"72e4717bb94c5eb5ce241cd9b39c4defb056c2525396e505883037e3fea66be7"} Dec 12 16:34:44 crc kubenswrapper[4693]: I1212 16:34:44.242540 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4wwf" event={"ID":"31085dc6-0e90-4e93-9247-9e1224a9225f","Type":"ContainerStarted","Data":"f74fb2034469c581fa70b9bdee927039c1428872babac03731ed73f852bc07dc"} Dec 12 16:34:45 crc kubenswrapper[4693]: I1212 16:34:45.254360 4693 generic.go:334] "Generic (PLEG): container finished" podID="31085dc6-0e90-4e93-9247-9e1224a9225f" containerID="f74fb2034469c581fa70b9bdee927039c1428872babac03731ed73f852bc07dc" exitCode=0 Dec 12 16:34:45 crc kubenswrapper[4693]: I1212 16:34:45.254459 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4wwf" event={"ID":"31085dc6-0e90-4e93-9247-9e1224a9225f","Type":"ContainerDied","Data":"f74fb2034469c581fa70b9bdee927039c1428872babac03731ed73f852bc07dc"} Dec 12 16:34:47 crc kubenswrapper[4693]: I1212 16:34:47.276509 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4wwf" event={"ID":"31085dc6-0e90-4e93-9247-9e1224a9225f","Type":"ContainerStarted","Data":"25dbefc912ccc80d097c238912fc3ad78834adf99e55e0dc0cd2aba04a76904f"} Dec 12 16:34:47 crc kubenswrapper[4693]: I1212 16:34:47.300657 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t4wwf" podStartSLOduration=3.367802844 podStartE2EDuration="7.300635225s" podCreationTimestamp="2025-12-12 16:34:40 +0000 UTC" firstStartedPulling="2025-12-12 16:34:42.22167072 +0000 UTC m=+2909.390310321" lastFinishedPulling="2025-12-12 16:34:46.154503091 +0000 UTC m=+2913.323142702" observedRunningTime="2025-12-12 16:34:47.295632101 +0000 UTC m=+2914.464271722" watchObservedRunningTime="2025-12-12 16:34:47.300635225 +0000 UTC m=+2914.469274826" Dec 12 16:34:50 crc kubenswrapper[4693]: I1212 16:34:50.994890 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:50 crc kubenswrapper[4693]: I1212 16:34:50.995457 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:51 crc kubenswrapper[4693]: I1212 16:34:51.045783 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:51 crc kubenswrapper[4693]: I1212 16:34:51.376588 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:51 crc kubenswrapper[4693]: I1212 16:34:51.425048 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t4wwf"] Dec 12 16:34:53 crc kubenswrapper[4693]: I1212 16:34:53.349953 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t4wwf" podUID="31085dc6-0e90-4e93-9247-9e1224a9225f" containerName="registry-server" containerID="cri-o://25dbefc912ccc80d097c238912fc3ad78834adf99e55e0dc0cd2aba04a76904f" gracePeriod=2 Dec 12 16:34:53 crc kubenswrapper[4693]: I1212 16:34:53.889124 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.009859 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31085dc6-0e90-4e93-9247-9e1224a9225f-utilities\") pod \"31085dc6-0e90-4e93-9247-9e1224a9225f\" (UID: \"31085dc6-0e90-4e93-9247-9e1224a9225f\") " Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.009989 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31085dc6-0e90-4e93-9247-9e1224a9225f-catalog-content\") pod \"31085dc6-0e90-4e93-9247-9e1224a9225f\" (UID: \"31085dc6-0e90-4e93-9247-9e1224a9225f\") " Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.010137 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvj52\" (UniqueName: \"kubernetes.io/projected/31085dc6-0e90-4e93-9247-9e1224a9225f-kube-api-access-wvj52\") pod \"31085dc6-0e90-4e93-9247-9e1224a9225f\" (UID: \"31085dc6-0e90-4e93-9247-9e1224a9225f\") " Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.011048 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31085dc6-0e90-4e93-9247-9e1224a9225f-utilities" (OuterVolumeSpecName: "utilities") pod "31085dc6-0e90-4e93-9247-9e1224a9225f" (UID: "31085dc6-0e90-4e93-9247-9e1224a9225f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.017522 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31085dc6-0e90-4e93-9247-9e1224a9225f-kube-api-access-wvj52" (OuterVolumeSpecName: "kube-api-access-wvj52") pod "31085dc6-0e90-4e93-9247-9e1224a9225f" (UID: "31085dc6-0e90-4e93-9247-9e1224a9225f"). InnerVolumeSpecName "kube-api-access-wvj52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.066915 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31085dc6-0e90-4e93-9247-9e1224a9225f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31085dc6-0e90-4e93-9247-9e1224a9225f" (UID: "31085dc6-0e90-4e93-9247-9e1224a9225f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.113860 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31085dc6-0e90-4e93-9247-9e1224a9225f-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.113899 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31085dc6-0e90-4e93-9247-9e1224a9225f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.113920 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvj52\" (UniqueName: \"kubernetes.io/projected/31085dc6-0e90-4e93-9247-9e1224a9225f-kube-api-access-wvj52\") on node \"crc\" DevicePath \"\"" Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.369366 4693 generic.go:334] "Generic (PLEG): container finished" podID="31085dc6-0e90-4e93-9247-9e1224a9225f" containerID="25dbefc912ccc80d097c238912fc3ad78834adf99e55e0dc0cd2aba04a76904f" exitCode=0 Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.369424 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4wwf" event={"ID":"31085dc6-0e90-4e93-9247-9e1224a9225f","Type":"ContainerDied","Data":"25dbefc912ccc80d097c238912fc3ad78834adf99e55e0dc0cd2aba04a76904f"} Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.369471 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t4wwf" Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.369490 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4wwf" event={"ID":"31085dc6-0e90-4e93-9247-9e1224a9225f","Type":"ContainerDied","Data":"72e4717bb94c5eb5ce241cd9b39c4defb056c2525396e505883037e3fea66be7"} Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.369513 4693 scope.go:117] "RemoveContainer" containerID="25dbefc912ccc80d097c238912fc3ad78834adf99e55e0dc0cd2aba04a76904f" Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.421942 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t4wwf"] Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.427989 4693 scope.go:117] "RemoveContainer" containerID="f74fb2034469c581fa70b9bdee927039c1428872babac03731ed73f852bc07dc" Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.435900 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t4wwf"] Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.464839 4693 scope.go:117] "RemoveContainer" containerID="4656ce03dd403222daabab3f5405707016f5aad21f20dde8210a1913c4694961" Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.531020 4693 scope.go:117] "RemoveContainer" containerID="25dbefc912ccc80d097c238912fc3ad78834adf99e55e0dc0cd2aba04a76904f" Dec 12 16:34:54 crc kubenswrapper[4693]: E1212 16:34:54.531483 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25dbefc912ccc80d097c238912fc3ad78834adf99e55e0dc0cd2aba04a76904f\": container with ID starting with 25dbefc912ccc80d097c238912fc3ad78834adf99e55e0dc0cd2aba04a76904f not found: ID does not exist" containerID="25dbefc912ccc80d097c238912fc3ad78834adf99e55e0dc0cd2aba04a76904f" Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.531524 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25dbefc912ccc80d097c238912fc3ad78834adf99e55e0dc0cd2aba04a76904f"} err="failed to get container status \"25dbefc912ccc80d097c238912fc3ad78834adf99e55e0dc0cd2aba04a76904f\": rpc error: code = NotFound desc = could not find container \"25dbefc912ccc80d097c238912fc3ad78834adf99e55e0dc0cd2aba04a76904f\": container with ID starting with 25dbefc912ccc80d097c238912fc3ad78834adf99e55e0dc0cd2aba04a76904f not found: ID does not exist" Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.531546 4693 scope.go:117] "RemoveContainer" containerID="f74fb2034469c581fa70b9bdee927039c1428872babac03731ed73f852bc07dc" Dec 12 16:34:54 crc kubenswrapper[4693]: E1212 16:34:54.532051 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f74fb2034469c581fa70b9bdee927039c1428872babac03731ed73f852bc07dc\": container with ID starting with f74fb2034469c581fa70b9bdee927039c1428872babac03731ed73f852bc07dc not found: ID does not exist" containerID="f74fb2034469c581fa70b9bdee927039c1428872babac03731ed73f852bc07dc" Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.532236 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74fb2034469c581fa70b9bdee927039c1428872babac03731ed73f852bc07dc"} err="failed to get container status \"f74fb2034469c581fa70b9bdee927039c1428872babac03731ed73f852bc07dc\": rpc error: code = NotFound desc = could not find 
container \"f74fb2034469c581fa70b9bdee927039c1428872babac03731ed73f852bc07dc\": container with ID starting with f74fb2034469c581fa70b9bdee927039c1428872babac03731ed73f852bc07dc not found: ID does not exist" Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.532452 4693 scope.go:117] "RemoveContainer" containerID="4656ce03dd403222daabab3f5405707016f5aad21f20dde8210a1913c4694961" Dec 12 16:34:54 crc kubenswrapper[4693]: E1212 16:34:54.532859 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4656ce03dd403222daabab3f5405707016f5aad21f20dde8210a1913c4694961\": container with ID starting with 4656ce03dd403222daabab3f5405707016f5aad21f20dde8210a1913c4694961 not found: ID does not exist" containerID="4656ce03dd403222daabab3f5405707016f5aad21f20dde8210a1913c4694961" Dec 12 16:34:54 crc kubenswrapper[4693]: I1212 16:34:54.533006 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4656ce03dd403222daabab3f5405707016f5aad21f20dde8210a1913c4694961"} err="failed to get container status \"4656ce03dd403222daabab3f5405707016f5aad21f20dde8210a1913c4694961\": rpc error: code = NotFound desc = could not find container \"4656ce03dd403222daabab3f5405707016f5aad21f20dde8210a1913c4694961\": container with ID starting with 4656ce03dd403222daabab3f5405707016f5aad21f20dde8210a1913c4694961 not found: ID does not exist" Dec 12 16:34:55 crc kubenswrapper[4693]: I1212 16:34:55.371258 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31085dc6-0e90-4e93-9247-9e1224a9225f" path="/var/lib/kubelet/pods/31085dc6-0e90-4e93-9247-9e1224a9225f/volumes" Dec 12 16:35:12 crc kubenswrapper[4693]: I1212 16:35:12.530107 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:35:12 crc kubenswrapper[4693]: I1212 16:35:12.530561 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:35:42 crc kubenswrapper[4693]: I1212 16:35:42.530299 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:35:42 crc kubenswrapper[4693]: I1212 16:35:42.530824 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:36:12 crc kubenswrapper[4693]: I1212 16:36:12.531059 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 
16:36:12 crc kubenswrapper[4693]: I1212 16:36:12.531740 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:36:12 crc kubenswrapper[4693]: I1212 16:36:12.531814 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 16:36:12 crc kubenswrapper[4693]: I1212 16:36:12.532934 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:36:12 crc kubenswrapper[4693]: I1212 16:36:12.533031 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" gracePeriod=600 Dec 12 16:36:12 crc kubenswrapper[4693]: E1212 16:36:12.667303 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:36:13 crc kubenswrapper[4693]: I1212 16:36:13.436819 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" exitCode=0 Dec 12 16:36:13 crc kubenswrapper[4693]: I1212 16:36:13.436894 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237"} Dec 12 16:36:13 crc kubenswrapper[4693]: I1212 16:36:13.437054 4693 scope.go:117] "RemoveContainer" containerID="1805e1e4e83dce627739eb17f054940726e4ffb2a25197f8e6f181e76311c752" Dec 12 16:36:13 crc kubenswrapper[4693]: I1212 16:36:13.438384 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:36:13 crc kubenswrapper[4693]: E1212 16:36:13.440526 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:36:26 crc kubenswrapper[4693]: I1212 16:36:26.357256 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:36:26 crc 
kubenswrapper[4693]: E1212 16:36:26.358189 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:36:39 crc kubenswrapper[4693]: I1212 16:36:39.357865 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:36:39 crc kubenswrapper[4693]: E1212 16:36:39.358689 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:36:54 crc kubenswrapper[4693]: I1212 16:36:54.357255 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:36:54 crc kubenswrapper[4693]: E1212 16:36:54.358152 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:37:08 crc kubenswrapper[4693]: I1212 16:37:08.358244 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:37:08 crc kubenswrapper[4693]: E1212 16:37:08.359717 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:37:16 crc kubenswrapper[4693]: I1212 16:37:16.321753 4693 generic.go:334] "Generic (PLEG): container finished" podID="a2cacc7e-a0ca-4b32-8591-b06ff9f4026a" containerID="ac6a4c77bc38600794bf46bc693f3353434169edfea1eae57996ab49a9b1976e" exitCode=0 Dec 12 16:37:16 crc kubenswrapper[4693]: I1212 16:37:16.322135 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" event={"ID":"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a","Type":"ContainerDied","Data":"ac6a4c77bc38600794bf46bc693f3353434169edfea1eae57996ab49a9b1976e"} Dec 12 16:37:17 crc kubenswrapper[4693]: I1212 16:37:17.837717 4693 util.go:48] "No ready sandbox for pod can be found. 
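The repeating "back-off 5m0s restarting failed container" errors here are the kubelet's restart back-off gate for machine-config-daemon: each periodic sync is refused until the current delay expires, and only the 5m0s cap is visible in the messages. A sketch of the usual kubelet policy behind it, assumed rather than read from this node's configuration: delays start at 10s and double per restart up to the cap.

    def restart_backoff(initial: float = 10.0, cap: float = 300.0):
        """Yield assumed kubelet restart delays: 10s, doubling, capped at 5m0s."""
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= 2

    g = restart_backoff()
    print([next(g) for _ in range(7)])  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]
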
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" Dec 12 16:37:17 crc kubenswrapper[4693]: I1212 16:37:17.948516 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-libvirt-secret-0\") pod \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " Dec 12 16:37:17 crc kubenswrapper[4693]: I1212 16:37:17.948600 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-ssh-key\") pod \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " Dec 12 16:37:17 crc kubenswrapper[4693]: I1212 16:37:17.948644 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b69c8\" (UniqueName: \"kubernetes.io/projected/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-kube-api-access-b69c8\") pod \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " Dec 12 16:37:17 crc kubenswrapper[4693]: I1212 16:37:17.948776 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-libvirt-combined-ca-bundle\") pod \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " Dec 12 16:37:17 crc kubenswrapper[4693]: I1212 16:37:17.948808 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-inventory\") pod \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\" (UID: \"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a\") " Dec 12 16:37:17 crc kubenswrapper[4693]: I1212 16:37:17.956813 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-kube-api-access-b69c8" (OuterVolumeSpecName: "kube-api-access-b69c8") pod "a2cacc7e-a0ca-4b32-8591-b06ff9f4026a" (UID: "a2cacc7e-a0ca-4b32-8591-b06ff9f4026a"). InnerVolumeSpecName "kube-api-access-b69c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:37:17 crc kubenswrapper[4693]: I1212 16:37:17.958715 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a2cacc7e-a0ca-4b32-8591-b06ff9f4026a" (UID: "a2cacc7e-a0ca-4b32-8591-b06ff9f4026a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.014200 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-inventory" (OuterVolumeSpecName: "inventory") pod "a2cacc7e-a0ca-4b32-8591-b06ff9f4026a" (UID: "a2cacc7e-a0ca-4b32-8591-b06ff9f4026a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.015989 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "a2cacc7e-a0ca-4b32-8591-b06ff9f4026a" (UID: "a2cacc7e-a0ca-4b32-8591-b06ff9f4026a"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.027290 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a2cacc7e-a0ca-4b32-8591-b06ff9f4026a" (UID: "a2cacc7e-a0ca-4b32-8591-b06ff9f4026a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.051716 4693 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.051762 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.051778 4693 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.051790 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.051802 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b69c8\" (UniqueName: \"kubernetes.io/projected/a2cacc7e-a0ca-4b32-8591-b06ff9f4026a-kube-api-access-b69c8\") on node \"crc\" DevicePath \"\"" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.342754 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" event={"ID":"a2cacc7e-a0ca-4b32-8591-b06ff9f4026a","Type":"ContainerDied","Data":"a0464e9df4710afa2e3018cbb3b40971b50c371cd4af35bed03d456b1007fd31"} Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.343132 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0464e9df4710afa2e3018cbb3b40971b50c371cd4af35bed03d456b1007fd31" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.342905 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xj8pm" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.523972 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz"] Dec 12 16:37:18 crc kubenswrapper[4693]: E1212 16:37:18.524491 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31085dc6-0e90-4e93-9247-9e1224a9225f" containerName="extract-content" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.524509 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="31085dc6-0e90-4e93-9247-9e1224a9225f" containerName="extract-content" Dec 12 16:37:18 crc kubenswrapper[4693]: E1212 16:37:18.524535 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31085dc6-0e90-4e93-9247-9e1224a9225f" containerName="registry-server" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.524544 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="31085dc6-0e90-4e93-9247-9e1224a9225f" containerName="registry-server" Dec 12 16:37:18 crc kubenswrapper[4693]: E1212 16:37:18.524572 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cacc7e-a0ca-4b32-8591-b06ff9f4026a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.524579 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cacc7e-a0ca-4b32-8591-b06ff9f4026a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 12 16:37:18 crc kubenswrapper[4693]: E1212 16:37:18.524593 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31085dc6-0e90-4e93-9247-9e1224a9225f" containerName="extract-utilities" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.524599 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="31085dc6-0e90-4e93-9247-9e1224a9225f" containerName="extract-utilities" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.524851 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2cacc7e-a0ca-4b32-8591-b06ff9f4026a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.524883 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="31085dc6-0e90-4e93-9247-9e1224a9225f" containerName="registry-server" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.525688 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.529077 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.530010 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.530262 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.530594 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.530827 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.531079 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.538358 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.552663 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz"] Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.670303 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.670407 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.670470 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.670564 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.670617 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.670686 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.670794 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.670841 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.670908 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffrzz\" (UniqueName: \"kubernetes.io/projected/485d0f9e-03e0-4aa3-873a-fc9996fb5351-kube-api-access-ffrzz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.772545 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.772605 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.772666 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.772704 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.772738 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.772795 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.772822 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.772846 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffrzz\" (UniqueName: \"kubernetes.io/projected/485d0f9e-03e0-4aa3-873a-fc9996fb5351-kube-api-access-ffrzz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.772931 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.773499 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.779004 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.779378 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.779797 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.780000 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.780323 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.782845 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.784356 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.794405 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffrzz\" (UniqueName: \"kubernetes.io/projected/485d0f9e-03e0-4aa3-873a-fc9996fb5351-kube-api-access-ffrzz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tnrdz\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:18 crc kubenswrapper[4693]: I1212 16:37:18.846902 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:37:19 crc kubenswrapper[4693]: I1212 16:37:19.431773 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz"] Dec 12 16:37:19 crc kubenswrapper[4693]: I1212 16:37:19.433153 4693 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 16:37:20 crc kubenswrapper[4693]: I1212 16:37:20.377925 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" event={"ID":"485d0f9e-03e0-4aa3-873a-fc9996fb5351","Type":"ContainerStarted","Data":"35ddd37f10d1f51fd2b0d991681273dcda4e1e7efcb2237b5dcd01677b29c229"} Dec 12 16:37:21 crc kubenswrapper[4693]: I1212 16:37:21.395641 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" event={"ID":"485d0f9e-03e0-4aa3-873a-fc9996fb5351","Type":"ContainerStarted","Data":"edb5839309140202d9cb7b858246a867edac13a16b643928968f7010be6fb1b5"} Dec 12 16:37:21 crc kubenswrapper[4693]: I1212 16:37:21.429254 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" podStartSLOduration=2.796869268 podStartE2EDuration="3.429238002s" podCreationTimestamp="2025-12-12 16:37:18 +0000 UTC" firstStartedPulling="2025-12-12 16:37:19.432863684 +0000 UTC m=+3066.601503285" lastFinishedPulling="2025-12-12 16:37:20.065232418 +0000 UTC m=+3067.233872019" observedRunningTime="2025-12-12 16:37:21.422485902 +0000 UTC m=+3068.591125503" watchObservedRunningTime="2025-12-12 16:37:21.429238002 +0000 UTC m=+3068.597877603" Dec 12 16:37:23 crc kubenswrapper[4693]: I1212 16:37:23.368603 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:37:23 crc kubenswrapper[4693]: E1212 16:37:23.370324 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:37:38 crc kubenswrapper[4693]: I1212 16:37:38.358717 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:37:38 crc kubenswrapper[4693]: E1212 16:37:38.359835 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:37:52 crc kubenswrapper[4693]: I1212 16:37:52.357553 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:37:52 crc kubenswrapper[4693]: E1212 16:37:52.358356 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:38:05 crc kubenswrapper[4693]: I1212 16:38:05.357692 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:38:05 crc kubenswrapper[4693]: E1212 16:38:05.359293 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:38:18 crc kubenswrapper[4693]: I1212 16:38:18.358034 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:38:18 crc kubenswrapper[4693]: E1212 16:38:18.359541 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:38:31 crc kubenswrapper[4693]: I1212 16:38:31.357297 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:38:31 crc kubenswrapper[4693]: E1212 16:38:31.358152 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:38:46 crc kubenswrapper[4693]: I1212 16:38:46.358848 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:38:46 crc kubenswrapper[4693]: E1212 16:38:46.359695 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:38:57 crc kubenswrapper[4693]: I1212 16:38:57.358885 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:38:57 crc kubenswrapper[4693]: E1212 16:38:57.359833 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" 
podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:39:00 crc kubenswrapper[4693]: I1212 16:39:00.624622 4693 scope.go:117] "RemoveContainer" containerID="48c538ea3f63b488b304e89917bb38bb4758fc1b77ab8c17aeccab327b2fed02" Dec 12 16:39:00 crc kubenswrapper[4693]: I1212 16:39:00.677537 4693 scope.go:117] "RemoveContainer" containerID="82e089462cf1b2ffe3ce1155f58b12372a9bdee1c4082d43cfd988e6da7dda95" Dec 12 16:39:00 crc kubenswrapper[4693]: I1212 16:39:00.731955 4693 scope.go:117] "RemoveContainer" containerID="aa2ddeb5cdb1514c95621e5d1ee4dc69f610ec30684a54b43797f4a3ea44150a" Dec 12 16:39:08 crc kubenswrapper[4693]: I1212 16:39:08.357286 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:39:08 crc kubenswrapper[4693]: E1212 16:39:08.357925 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:39:19 crc kubenswrapper[4693]: I1212 16:39:19.357925 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:39:19 crc kubenswrapper[4693]: E1212 16:39:19.359787 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:39:30 crc kubenswrapper[4693]: I1212 16:39:30.357039 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:39:30 crc kubenswrapper[4693]: E1212 16:39:30.357970 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:39:41 crc kubenswrapper[4693]: I1212 16:39:41.357225 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:39:41 crc kubenswrapper[4693]: E1212 16:39:41.358302 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:39:56 crc kubenswrapper[4693]: I1212 16:39:56.357351 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:39:56 crc kubenswrapper[4693]: E1212 16:39:56.358223 4693 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:40:09 crc kubenswrapper[4693]: I1212 16:40:09.357695 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:40:09 crc kubenswrapper[4693]: E1212 16:40:09.359738 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:40:23 crc kubenswrapper[4693]: I1212 16:40:23.243658 4693 generic.go:334] "Generic (PLEG): container finished" podID="485d0f9e-03e0-4aa3-873a-fc9996fb5351" containerID="edb5839309140202d9cb7b858246a867edac13a16b643928968f7010be6fb1b5" exitCode=0 Dec 12 16:40:23 crc kubenswrapper[4693]: I1212 16:40:23.243732 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" event={"ID":"485d0f9e-03e0-4aa3-873a-fc9996fb5351","Type":"ContainerDied","Data":"edb5839309140202d9cb7b858246a867edac13a16b643928968f7010be6fb1b5"} Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.356565 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:40:24 crc kubenswrapper[4693]: E1212 16:40:24.357119 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.713130 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.782121 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-migration-ssh-key-0\") pod \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.782170 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-migration-ssh-key-1\") pod \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.782215 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-combined-ca-bundle\") pod \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.782244 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-cell1-compute-config-0\") pod \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.782301 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-extra-config-0\") pod \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.782368 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-cell1-compute-config-1\") pod \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.783095 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffrzz\" (UniqueName: \"kubernetes.io/projected/485d0f9e-03e0-4aa3-873a-fc9996fb5351-kube-api-access-ffrzz\") pod \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.783221 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-inventory\") pod \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.783243 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-ssh-key\") pod \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\" (UID: \"485d0f9e-03e0-4aa3-873a-fc9996fb5351\") " Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.793850 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/485d0f9e-03e0-4aa3-873a-fc9996fb5351-kube-api-access-ffrzz" (OuterVolumeSpecName: "kube-api-access-ffrzz") pod "485d0f9e-03e0-4aa3-873a-fc9996fb5351" (UID: "485d0f9e-03e0-4aa3-873a-fc9996fb5351"). InnerVolumeSpecName "kube-api-access-ffrzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.796140 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "485d0f9e-03e0-4aa3-873a-fc9996fb5351" (UID: "485d0f9e-03e0-4aa3-873a-fc9996fb5351"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.815338 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "485d0f9e-03e0-4aa3-873a-fc9996fb5351" (UID: "485d0f9e-03e0-4aa3-873a-fc9996fb5351"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.816727 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "485d0f9e-03e0-4aa3-873a-fc9996fb5351" (UID: "485d0f9e-03e0-4aa3-873a-fc9996fb5351"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.818249 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "485d0f9e-03e0-4aa3-873a-fc9996fb5351" (UID: "485d0f9e-03e0-4aa3-873a-fc9996fb5351"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.826424 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "485d0f9e-03e0-4aa3-873a-fc9996fb5351" (UID: "485d0f9e-03e0-4aa3-873a-fc9996fb5351"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.831121 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "485d0f9e-03e0-4aa3-873a-fc9996fb5351" (UID: "485d0f9e-03e0-4aa3-873a-fc9996fb5351"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.846503 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-inventory" (OuterVolumeSpecName: "inventory") pod "485d0f9e-03e0-4aa3-873a-fc9996fb5351" (UID: "485d0f9e-03e0-4aa3-873a-fc9996fb5351"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.849319 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "485d0f9e-03e0-4aa3-873a-fc9996fb5351" (UID: "485d0f9e-03e0-4aa3-873a-fc9996fb5351"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.886046 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.886083 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.886093 4693 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.886102 4693 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.886110 4693 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.886119 4693 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.886128 4693 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.886138 4693 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/485d0f9e-03e0-4aa3-873a-fc9996fb5351-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 12 16:40:24 crc kubenswrapper[4693]: I1212 16:40:24.886145 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffrzz\" (UniqueName: \"kubernetes.io/projected/485d0f9e-03e0-4aa3-873a-fc9996fb5351-kube-api-access-ffrzz\") on node \"crc\" DevicePath \"\"" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.269384 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" event={"ID":"485d0f9e-03e0-4aa3-873a-fc9996fb5351","Type":"ContainerDied","Data":"35ddd37f10d1f51fd2b0d991681273dcda4e1e7efcb2237b5dcd01677b29c229"} Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.270016 4693 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="35ddd37f10d1f51fd2b0d991681273dcda4e1e7efcb2237b5dcd01677b29c229" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.269509 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tnrdz" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.394764 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k"] Dec 12 16:40:25 crc kubenswrapper[4693]: E1212 16:40:25.395417 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485d0f9e-03e0-4aa3-873a-fc9996fb5351" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.395438 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="485d0f9e-03e0-4aa3-873a-fc9996fb5351" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.395792 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="485d0f9e-03e0-4aa3-873a-fc9996fb5351" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.396799 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.399807 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.399883 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.399927 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.400529 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.405094 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.422886 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k"] Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.501288 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.501343 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.501553 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfgvh\" (UniqueName: 
\"kubernetes.io/projected/febe4770-6951-4b71-89ca-397954381302-kube-api-access-xfgvh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.501649 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.501885 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.501949 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.501979 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.605307 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.605383 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.605414 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.605477 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.605528 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.605637 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfgvh\" (UniqueName: \"kubernetes.io/projected/febe4770-6951-4b71-89ca-397954381302-kube-api-access-xfgvh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.605723 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.610026 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.610147 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.610320 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.610901 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.614721 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.622703 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfgvh\" (UniqueName: \"kubernetes.io/projected/febe4770-6951-4b71-89ca-397954381302-kube-api-access-xfgvh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.632172 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-77w6k\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:25 crc kubenswrapper[4693]: I1212 16:40:25.725987 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:40:26 crc kubenswrapper[4693]: W1212 16:40:26.373186 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfebe4770_6951_4b71_89ca_397954381302.slice/crio-59626f05ed0105a705773cd58bfdebc21a29fef580c5fae3c7f9eb9200291c85 WatchSource:0}: Error finding container 59626f05ed0105a705773cd58bfdebc21a29fef580c5fae3c7f9eb9200291c85: Status 404 returned error can't find the container with id 59626f05ed0105a705773cd58bfdebc21a29fef580c5fae3c7f9eb9200291c85 Dec 12 16:40:26 crc kubenswrapper[4693]: I1212 16:40:26.374336 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k"] Dec 12 16:40:27 crc kubenswrapper[4693]: I1212 16:40:27.298970 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" event={"ID":"febe4770-6951-4b71-89ca-397954381302","Type":"ContainerStarted","Data":"59626f05ed0105a705773cd58bfdebc21a29fef580c5fae3c7f9eb9200291c85"} Dec 12 16:40:28 crc kubenswrapper[4693]: I1212 16:40:28.313644 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" event={"ID":"febe4770-6951-4b71-89ca-397954381302","Type":"ContainerStarted","Data":"4fe019b3d4e43337b2d5b306c51101a497029fe907fa8de4b2ab293cb4766b97"} Dec 12 16:40:28 crc kubenswrapper[4693]: I1212 16:40:28.348134 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" podStartSLOduration=2.616142027 podStartE2EDuration="3.348115817s" podCreationTimestamp="2025-12-12 16:40:25 +0000 UTC" firstStartedPulling="2025-12-12 16:40:26.37603948 +0000 UTC m=+3253.544679081" lastFinishedPulling="2025-12-12 16:40:27.10801327 +0000 UTC m=+3254.276652871" observedRunningTime="2025-12-12 16:40:28.339591851 +0000 UTC 
m=+3255.508231452" watchObservedRunningTime="2025-12-12 16:40:28.348115817 +0000 UTC m=+3255.516755428" Dec 12 16:40:35 crc kubenswrapper[4693]: I1212 16:40:35.357713 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:40:35 crc kubenswrapper[4693]: E1212 16:40:35.358824 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:40:50 crc kubenswrapper[4693]: I1212 16:40:50.357378 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:40:50 crc kubenswrapper[4693]: E1212 16:40:50.358188 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:41:02 crc kubenswrapper[4693]: I1212 16:41:02.358048 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:41:02 crc kubenswrapper[4693]: E1212 16:41:02.360229 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:41:16 crc kubenswrapper[4693]: I1212 16:41:16.359665 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:41:16 crc kubenswrapper[4693]: I1212 16:41:16.926537 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"e62673de23f46e5e24bbb6ce63f6df95e4e0371a449f0b6d43f70818c45017fc"} Dec 12 16:41:26 crc kubenswrapper[4693]: I1212 16:41:26.853583 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ql9xh"] Dec 12 16:41:26 crc kubenswrapper[4693]: I1212 16:41:26.858354 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:26 crc kubenswrapper[4693]: I1212 16:41:26.872215 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ql9xh"] Dec 12 16:41:27 crc kubenswrapper[4693]: I1212 16:41:27.003378 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052de4fb-4c76-482f-aed9-c860d007f38a-utilities\") pod \"redhat-operators-ql9xh\" (UID: \"052de4fb-4c76-482f-aed9-c860d007f38a\") " pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:27 crc kubenswrapper[4693]: I1212 16:41:27.003774 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqrcw\" (UniqueName: \"kubernetes.io/projected/052de4fb-4c76-482f-aed9-c860d007f38a-kube-api-access-dqrcw\") pod \"redhat-operators-ql9xh\" (UID: \"052de4fb-4c76-482f-aed9-c860d007f38a\") " pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:27 crc kubenswrapper[4693]: I1212 16:41:27.003806 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052de4fb-4c76-482f-aed9-c860d007f38a-catalog-content\") pod \"redhat-operators-ql9xh\" (UID: \"052de4fb-4c76-482f-aed9-c860d007f38a\") " pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:27 crc kubenswrapper[4693]: I1212 16:41:27.106362 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052de4fb-4c76-482f-aed9-c860d007f38a-utilities\") pod \"redhat-operators-ql9xh\" (UID: \"052de4fb-4c76-482f-aed9-c860d007f38a\") " pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:27 crc kubenswrapper[4693]: I1212 16:41:27.106412 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqrcw\" (UniqueName: \"kubernetes.io/projected/052de4fb-4c76-482f-aed9-c860d007f38a-kube-api-access-dqrcw\") pod \"redhat-operators-ql9xh\" (UID: \"052de4fb-4c76-482f-aed9-c860d007f38a\") " pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:27 crc kubenswrapper[4693]: I1212 16:41:27.106435 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052de4fb-4c76-482f-aed9-c860d007f38a-catalog-content\") pod \"redhat-operators-ql9xh\" (UID: \"052de4fb-4c76-482f-aed9-c860d007f38a\") " pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:27 crc kubenswrapper[4693]: I1212 16:41:27.107123 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052de4fb-4c76-482f-aed9-c860d007f38a-catalog-content\") pod \"redhat-operators-ql9xh\" (UID: \"052de4fb-4c76-482f-aed9-c860d007f38a\") " pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:27 crc kubenswrapper[4693]: I1212 16:41:27.107124 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052de4fb-4c76-482f-aed9-c860d007f38a-utilities\") pod \"redhat-operators-ql9xh\" (UID: \"052de4fb-4c76-482f-aed9-c860d007f38a\") " pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:27 crc kubenswrapper[4693]: I1212 16:41:27.124849 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dqrcw\" (UniqueName: \"kubernetes.io/projected/052de4fb-4c76-482f-aed9-c860d007f38a-kube-api-access-dqrcw\") pod \"redhat-operators-ql9xh\" (UID: \"052de4fb-4c76-482f-aed9-c860d007f38a\") " pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:27 crc kubenswrapper[4693]: I1212 16:41:27.206101 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:27 crc kubenswrapper[4693]: I1212 16:41:27.737748 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ql9xh"] Dec 12 16:41:27 crc kubenswrapper[4693]: W1212 16:41:27.738144 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod052de4fb_4c76_482f_aed9_c860d007f38a.slice/crio-a7e41e6fca641d3a270a6604d717a06ec1df4dca667f115035ae3b0eda6f4d8e WatchSource:0}: Error finding container a7e41e6fca641d3a270a6604d717a06ec1df4dca667f115035ae3b0eda6f4d8e: Status 404 returned error can't find the container with id a7e41e6fca641d3a270a6604d717a06ec1df4dca667f115035ae3b0eda6f4d8e Dec 12 16:41:28 crc kubenswrapper[4693]: I1212 16:41:28.063355 4693 generic.go:334] "Generic (PLEG): container finished" podID="052de4fb-4c76-482f-aed9-c860d007f38a" containerID="d81dafc3ec75066f6e14ab9c93bcecc13a0bf2e3673be4daac65705b4c71e707" exitCode=0 Dec 12 16:41:28 crc kubenswrapper[4693]: I1212 16:41:28.063405 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ql9xh" event={"ID":"052de4fb-4c76-482f-aed9-c860d007f38a","Type":"ContainerDied","Data":"d81dafc3ec75066f6e14ab9c93bcecc13a0bf2e3673be4daac65705b4c71e707"} Dec 12 16:41:28 crc kubenswrapper[4693]: I1212 16:41:28.063437 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ql9xh" event={"ID":"052de4fb-4c76-482f-aed9-c860d007f38a","Type":"ContainerStarted","Data":"a7e41e6fca641d3a270a6604d717a06ec1df4dca667f115035ae3b0eda6f4d8e"} Dec 12 16:41:30 crc kubenswrapper[4693]: I1212 16:41:30.087725 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ql9xh" event={"ID":"052de4fb-4c76-482f-aed9-c860d007f38a","Type":"ContainerStarted","Data":"5b624d6b26419e1b2d4b6205a5b67c308e898c3d051775ac0f3533978f9b1089"} Dec 12 16:41:35 crc kubenswrapper[4693]: I1212 16:41:35.160454 4693 generic.go:334] "Generic (PLEG): container finished" podID="052de4fb-4c76-482f-aed9-c860d007f38a" containerID="5b624d6b26419e1b2d4b6205a5b67c308e898c3d051775ac0f3533978f9b1089" exitCode=0 Dec 12 16:41:35 crc kubenswrapper[4693]: I1212 16:41:35.161111 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ql9xh" event={"ID":"052de4fb-4c76-482f-aed9-c860d007f38a","Type":"ContainerDied","Data":"5b624d6b26419e1b2d4b6205a5b67c308e898c3d051775ac0f3533978f9b1089"} Dec 12 16:41:36 crc kubenswrapper[4693]: I1212 16:41:36.179753 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ql9xh" event={"ID":"052de4fb-4c76-482f-aed9-c860d007f38a","Type":"ContainerStarted","Data":"064c840dd841c9a56ebf22dab101d460e87a9ee213f86bf97505cbf622bcfa25"} Dec 12 16:41:36 crc kubenswrapper[4693]: I1212 16:41:36.209142 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ql9xh" podStartSLOduration=2.60342175 podStartE2EDuration="10.209121488s" 
podCreationTimestamp="2025-12-12 16:41:26 +0000 UTC" firstStartedPulling="2025-12-12 16:41:28.067399775 +0000 UTC m=+3315.236039376" lastFinishedPulling="2025-12-12 16:41:35.673099503 +0000 UTC m=+3322.841739114" observedRunningTime="2025-12-12 16:41:36.204000862 +0000 UTC m=+3323.372640473" watchObservedRunningTime="2025-12-12 16:41:36.209121488 +0000 UTC m=+3323.377761099" Dec 12 16:41:37 crc kubenswrapper[4693]: I1212 16:41:37.207511 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:37 crc kubenswrapper[4693]: I1212 16:41:37.208050 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:38 crc kubenswrapper[4693]: I1212 16:41:38.271732 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ql9xh" podUID="052de4fb-4c76-482f-aed9-c860d007f38a" containerName="registry-server" probeResult="failure" output=< Dec 12 16:41:38 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 16:41:38 crc kubenswrapper[4693]: > Dec 12 16:41:47 crc kubenswrapper[4693]: I1212 16:41:47.706605 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:47 crc kubenswrapper[4693]: I1212 16:41:47.775619 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:47 crc kubenswrapper[4693]: I1212 16:41:47.954569 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ql9xh"] Dec 12 16:41:49 crc kubenswrapper[4693]: I1212 16:41:49.330705 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ql9xh" podUID="052de4fb-4c76-482f-aed9-c860d007f38a" containerName="registry-server" containerID="cri-o://064c840dd841c9a56ebf22dab101d460e87a9ee213f86bf97505cbf622bcfa25" gracePeriod=2 Dec 12 16:41:49 crc kubenswrapper[4693]: I1212 16:41:49.889837 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:49 crc kubenswrapper[4693]: I1212 16:41:49.942006 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqrcw\" (UniqueName: \"kubernetes.io/projected/052de4fb-4c76-482f-aed9-c860d007f38a-kube-api-access-dqrcw\") pod \"052de4fb-4c76-482f-aed9-c860d007f38a\" (UID: \"052de4fb-4c76-482f-aed9-c860d007f38a\") " Dec 12 16:41:49 crc kubenswrapper[4693]: I1212 16:41:49.942344 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052de4fb-4c76-482f-aed9-c860d007f38a-catalog-content\") pod \"052de4fb-4c76-482f-aed9-c860d007f38a\" (UID: \"052de4fb-4c76-482f-aed9-c860d007f38a\") " Dec 12 16:41:49 crc kubenswrapper[4693]: I1212 16:41:49.942390 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052de4fb-4c76-482f-aed9-c860d007f38a-utilities\") pod \"052de4fb-4c76-482f-aed9-c860d007f38a\" (UID: \"052de4fb-4c76-482f-aed9-c860d007f38a\") " Dec 12 16:41:49 crc kubenswrapper[4693]: I1212 16:41:49.943509 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/052de4fb-4c76-482f-aed9-c860d007f38a-utilities" (OuterVolumeSpecName: "utilities") pod "052de4fb-4c76-482f-aed9-c860d007f38a" (UID: "052de4fb-4c76-482f-aed9-c860d007f38a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:41:49 crc kubenswrapper[4693]: I1212 16:41:49.949262 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052de4fb-4c76-482f-aed9-c860d007f38a-kube-api-access-dqrcw" (OuterVolumeSpecName: "kube-api-access-dqrcw") pod "052de4fb-4c76-482f-aed9-c860d007f38a" (UID: "052de4fb-4c76-482f-aed9-c860d007f38a"). InnerVolumeSpecName "kube-api-access-dqrcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.044943 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052de4fb-4c76-482f-aed9-c860d007f38a-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.044978 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqrcw\" (UniqueName: \"kubernetes.io/projected/052de4fb-4c76-482f-aed9-c860d007f38a-kube-api-access-dqrcw\") on node \"crc\" DevicePath \"\"" Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.085089 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/052de4fb-4c76-482f-aed9-c860d007f38a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "052de4fb-4c76-482f-aed9-c860d007f38a" (UID: "052de4fb-4c76-482f-aed9-c860d007f38a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.146939 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052de4fb-4c76-482f-aed9-c860d007f38a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.345044 4693 generic.go:334] "Generic (PLEG): container finished" podID="052de4fb-4c76-482f-aed9-c860d007f38a" containerID="064c840dd841c9a56ebf22dab101d460e87a9ee213f86bf97505cbf622bcfa25" exitCode=0 Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.345087 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ql9xh" event={"ID":"052de4fb-4c76-482f-aed9-c860d007f38a","Type":"ContainerDied","Data":"064c840dd841c9a56ebf22dab101d460e87a9ee213f86bf97505cbf622bcfa25"} Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.345116 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ql9xh" event={"ID":"052de4fb-4c76-482f-aed9-c860d007f38a","Type":"ContainerDied","Data":"a7e41e6fca641d3a270a6604d717a06ec1df4dca667f115035ae3b0eda6f4d8e"} Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.345125 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ql9xh" Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.345134 4693 scope.go:117] "RemoveContainer" containerID="064c840dd841c9a56ebf22dab101d460e87a9ee213f86bf97505cbf622bcfa25" Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.385856 4693 scope.go:117] "RemoveContainer" containerID="5b624d6b26419e1b2d4b6205a5b67c308e898c3d051775ac0f3533978f9b1089" Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.400737 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ql9xh"] Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.417221 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ql9xh"] Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.437467 4693 scope.go:117] "RemoveContainer" containerID="d81dafc3ec75066f6e14ab9c93bcecc13a0bf2e3673be4daac65705b4c71e707" Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.487829 4693 scope.go:117] "RemoveContainer" containerID="064c840dd841c9a56ebf22dab101d460e87a9ee213f86bf97505cbf622bcfa25" Dec 12 16:41:50 crc kubenswrapper[4693]: E1212 16:41:50.488813 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064c840dd841c9a56ebf22dab101d460e87a9ee213f86bf97505cbf622bcfa25\": container with ID starting with 064c840dd841c9a56ebf22dab101d460e87a9ee213f86bf97505cbf622bcfa25 not found: ID does not exist" containerID="064c840dd841c9a56ebf22dab101d460e87a9ee213f86bf97505cbf622bcfa25" Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.488942 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"064c840dd841c9a56ebf22dab101d460e87a9ee213f86bf97505cbf622bcfa25"} err="failed to get container status \"064c840dd841c9a56ebf22dab101d460e87a9ee213f86bf97505cbf622bcfa25\": rpc error: code = NotFound desc = could not find container \"064c840dd841c9a56ebf22dab101d460e87a9ee213f86bf97505cbf622bcfa25\": container with ID starting with 064c840dd841c9a56ebf22dab101d460e87a9ee213f86bf97505cbf622bcfa25 not found: ID does not exist" Dec 12 16:41:50 crc 
kubenswrapper[4693]: I1212 16:41:50.489043 4693 scope.go:117] "RemoveContainer" containerID="5b624d6b26419e1b2d4b6205a5b67c308e898c3d051775ac0f3533978f9b1089" Dec 12 16:41:50 crc kubenswrapper[4693]: E1212 16:41:50.489374 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b624d6b26419e1b2d4b6205a5b67c308e898c3d051775ac0f3533978f9b1089\": container with ID starting with 5b624d6b26419e1b2d4b6205a5b67c308e898c3d051775ac0f3533978f9b1089 not found: ID does not exist" containerID="5b624d6b26419e1b2d4b6205a5b67c308e898c3d051775ac0f3533978f9b1089" Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.489477 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b624d6b26419e1b2d4b6205a5b67c308e898c3d051775ac0f3533978f9b1089"} err="failed to get container status \"5b624d6b26419e1b2d4b6205a5b67c308e898c3d051775ac0f3533978f9b1089\": rpc error: code = NotFound desc = could not find container \"5b624d6b26419e1b2d4b6205a5b67c308e898c3d051775ac0f3533978f9b1089\": container with ID starting with 5b624d6b26419e1b2d4b6205a5b67c308e898c3d051775ac0f3533978f9b1089 not found: ID does not exist" Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.489592 4693 scope.go:117] "RemoveContainer" containerID="d81dafc3ec75066f6e14ab9c93bcecc13a0bf2e3673be4daac65705b4c71e707" Dec 12 16:41:50 crc kubenswrapper[4693]: E1212 16:41:50.489968 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d81dafc3ec75066f6e14ab9c93bcecc13a0bf2e3673be4daac65705b4c71e707\": container with ID starting with d81dafc3ec75066f6e14ab9c93bcecc13a0bf2e3673be4daac65705b4c71e707 not found: ID does not exist" containerID="d81dafc3ec75066f6e14ab9c93bcecc13a0bf2e3673be4daac65705b4c71e707" Dec 12 16:41:50 crc kubenswrapper[4693]: I1212 16:41:50.490084 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81dafc3ec75066f6e14ab9c93bcecc13a0bf2e3673be4daac65705b4c71e707"} err="failed to get container status \"d81dafc3ec75066f6e14ab9c93bcecc13a0bf2e3673be4daac65705b4c71e707\": rpc error: code = NotFound desc = could not find container \"d81dafc3ec75066f6e14ab9c93bcecc13a0bf2e3673be4daac65705b4c71e707\": container with ID starting with d81dafc3ec75066f6e14ab9c93bcecc13a0bf2e3673be4daac65705b4c71e707 not found: ID does not exist" Dec 12 16:41:51 crc kubenswrapper[4693]: I1212 16:41:51.377211 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="052de4fb-4c76-482f-aed9-c860d007f38a" path="/var/lib/kubelet/pods/052de4fb-4c76-482f-aed9-c860d007f38a/volumes" Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.000656 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gtvgd"] Dec 12 16:42:20 crc kubenswrapper[4693]: E1212 16:42:20.001698 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052de4fb-4c76-482f-aed9-c860d007f38a" containerName="extract-content" Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.001739 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="052de4fb-4c76-482f-aed9-c860d007f38a" containerName="extract-content" Dec 12 16:42:20 crc kubenswrapper[4693]: E1212 16:42:20.001754 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052de4fb-4c76-482f-aed9-c860d007f38a" containerName="extract-utilities" Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.001760 4693 
state_mem.go:107] "Deleted CPUSet assignment" podUID="052de4fb-4c76-482f-aed9-c860d007f38a" containerName="extract-utilities" Dec 12 16:42:20 crc kubenswrapper[4693]: E1212 16:42:20.001815 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052de4fb-4c76-482f-aed9-c860d007f38a" containerName="registry-server" Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.001821 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="052de4fb-4c76-482f-aed9-c860d007f38a" containerName="registry-server" Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.002051 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="052de4fb-4c76-482f-aed9-c860d007f38a" containerName="registry-server" Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.004595 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.017497 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gtvgd"] Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.112128 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/988fda3e-4d80-4caf-9a15-488cb7f25768-catalog-content\") pod \"redhat-marketplace-gtvgd\" (UID: \"988fda3e-4d80-4caf-9a15-488cb7f25768\") " pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.112601 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/988fda3e-4d80-4caf-9a15-488cb7f25768-utilities\") pod \"redhat-marketplace-gtvgd\" (UID: \"988fda3e-4d80-4caf-9a15-488cb7f25768\") " pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.112763 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ndvl\" (UniqueName: \"kubernetes.io/projected/988fda3e-4d80-4caf-9a15-488cb7f25768-kube-api-access-2ndvl\") pod \"redhat-marketplace-gtvgd\" (UID: \"988fda3e-4d80-4caf-9a15-488cb7f25768\") " pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.215172 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ndvl\" (UniqueName: \"kubernetes.io/projected/988fda3e-4d80-4caf-9a15-488cb7f25768-kube-api-access-2ndvl\") pod \"redhat-marketplace-gtvgd\" (UID: \"988fda3e-4d80-4caf-9a15-488cb7f25768\") " pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.215369 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/988fda3e-4d80-4caf-9a15-488cb7f25768-catalog-content\") pod \"redhat-marketplace-gtvgd\" (UID: \"988fda3e-4d80-4caf-9a15-488cb7f25768\") " pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.215517 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/988fda3e-4d80-4caf-9a15-488cb7f25768-utilities\") pod \"redhat-marketplace-gtvgd\" (UID: \"988fda3e-4d80-4caf-9a15-488cb7f25768\") " pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:20 crc 
kubenswrapper[4693]: I1212 16:42:20.215950 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/988fda3e-4d80-4caf-9a15-488cb7f25768-catalog-content\") pod \"redhat-marketplace-gtvgd\" (UID: \"988fda3e-4d80-4caf-9a15-488cb7f25768\") " pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.216035 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/988fda3e-4d80-4caf-9a15-488cb7f25768-utilities\") pod \"redhat-marketplace-gtvgd\" (UID: \"988fda3e-4d80-4caf-9a15-488cb7f25768\") " pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.240433 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ndvl\" (UniqueName: \"kubernetes.io/projected/988fda3e-4d80-4caf-9a15-488cb7f25768-kube-api-access-2ndvl\") pod \"redhat-marketplace-gtvgd\" (UID: \"988fda3e-4d80-4caf-9a15-488cb7f25768\") " pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.335519 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:20 crc kubenswrapper[4693]: I1212 16:42:20.838864 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gtvgd"] Dec 12 16:42:21 crc kubenswrapper[4693]: I1212 16:42:21.799462 4693 generic.go:334] "Generic (PLEG): container finished" podID="988fda3e-4d80-4caf-9a15-488cb7f25768" containerID="1b5c95cdf1e326cb7fce790f75ac2478bfdbc3608487ae77ec8143f2debf24f3" exitCode=0 Dec 12 16:42:21 crc kubenswrapper[4693]: I1212 16:42:21.799791 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gtvgd" event={"ID":"988fda3e-4d80-4caf-9a15-488cb7f25768","Type":"ContainerDied","Data":"1b5c95cdf1e326cb7fce790f75ac2478bfdbc3608487ae77ec8143f2debf24f3"} Dec 12 16:42:21 crc kubenswrapper[4693]: I1212 16:42:21.799833 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gtvgd" event={"ID":"988fda3e-4d80-4caf-9a15-488cb7f25768","Type":"ContainerStarted","Data":"ae793aa2ba708c83ced097c473080f3d3184fae59178914ed8e45a829935288d"} Dec 12 16:42:21 crc kubenswrapper[4693]: I1212 16:42:21.803384 4693 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 16:42:22 crc kubenswrapper[4693]: I1212 16:42:22.814199 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gtvgd" event={"ID":"988fda3e-4d80-4caf-9a15-488cb7f25768","Type":"ContainerStarted","Data":"e210077ae406216c259ad09eb3edd10f0579f0b6e8e6829dfa423aee8d5487bc"} Dec 12 16:42:23 crc kubenswrapper[4693]: I1212 16:42:23.827936 4693 generic.go:334] "Generic (PLEG): container finished" podID="988fda3e-4d80-4caf-9a15-488cb7f25768" containerID="e210077ae406216c259ad09eb3edd10f0579f0b6e8e6829dfa423aee8d5487bc" exitCode=0 Dec 12 16:42:23 crc kubenswrapper[4693]: I1212 16:42:23.828042 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gtvgd" event={"ID":"988fda3e-4d80-4caf-9a15-488cb7f25768","Type":"ContainerDied","Data":"e210077ae406216c259ad09eb3edd10f0579f0b6e8e6829dfa423aee8d5487bc"} Dec 12 16:42:24 crc kubenswrapper[4693]: I1212 16:42:24.852041 4693 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gtvgd" event={"ID":"988fda3e-4d80-4caf-9a15-488cb7f25768","Type":"ContainerStarted","Data":"16842bc9257bd0b4b401495f91480b84278c00d33e508facf9d0bb047e823b4c"} Dec 12 16:42:24 crc kubenswrapper[4693]: I1212 16:42:24.876338 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gtvgd" podStartSLOduration=3.425976595 podStartE2EDuration="5.876319667s" podCreationTimestamp="2025-12-12 16:42:19 +0000 UTC" firstStartedPulling="2025-12-12 16:42:21.803019769 +0000 UTC m=+3368.971659380" lastFinishedPulling="2025-12-12 16:42:24.253362841 +0000 UTC m=+3371.422002452" observedRunningTime="2025-12-12 16:42:24.865697455 +0000 UTC m=+3372.034337066" watchObservedRunningTime="2025-12-12 16:42:24.876319667 +0000 UTC m=+3372.044959268" Dec 12 16:42:30 crc kubenswrapper[4693]: I1212 16:42:30.336374 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:30 crc kubenswrapper[4693]: I1212 16:42:30.336841 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:30 crc kubenswrapper[4693]: I1212 16:42:30.405050 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:30 crc kubenswrapper[4693]: I1212 16:42:30.989110 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:31 crc kubenswrapper[4693]: I1212 16:42:31.045173 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gtvgd"] Dec 12 16:42:32 crc kubenswrapper[4693]: I1212 16:42:32.952366 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gtvgd" podUID="988fda3e-4d80-4caf-9a15-488cb7f25768" containerName="registry-server" containerID="cri-o://16842bc9257bd0b4b401495f91480b84278c00d33e508facf9d0bb047e823b4c" gracePeriod=2 Dec 12 16:42:33 crc kubenswrapper[4693]: I1212 16:42:33.515793 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:33 crc kubenswrapper[4693]: I1212 16:42:33.656953 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/988fda3e-4d80-4caf-9a15-488cb7f25768-catalog-content\") pod \"988fda3e-4d80-4caf-9a15-488cb7f25768\" (UID: \"988fda3e-4d80-4caf-9a15-488cb7f25768\") " Dec 12 16:42:33 crc kubenswrapper[4693]: I1212 16:42:33.657183 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/988fda3e-4d80-4caf-9a15-488cb7f25768-utilities\") pod \"988fda3e-4d80-4caf-9a15-488cb7f25768\" (UID: \"988fda3e-4d80-4caf-9a15-488cb7f25768\") " Dec 12 16:42:33 crc kubenswrapper[4693]: I1212 16:42:33.657289 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ndvl\" (UniqueName: \"kubernetes.io/projected/988fda3e-4d80-4caf-9a15-488cb7f25768-kube-api-access-2ndvl\") pod \"988fda3e-4d80-4caf-9a15-488cb7f25768\" (UID: \"988fda3e-4d80-4caf-9a15-488cb7f25768\") " Dec 12 16:42:33 crc kubenswrapper[4693]: I1212 16:42:33.658953 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/988fda3e-4d80-4caf-9a15-488cb7f25768-utilities" (OuterVolumeSpecName: "utilities") pod "988fda3e-4d80-4caf-9a15-488cb7f25768" (UID: "988fda3e-4d80-4caf-9a15-488cb7f25768"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:42:33 crc kubenswrapper[4693]: I1212 16:42:33.664285 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/988fda3e-4d80-4caf-9a15-488cb7f25768-kube-api-access-2ndvl" (OuterVolumeSpecName: "kube-api-access-2ndvl") pod "988fda3e-4d80-4caf-9a15-488cb7f25768" (UID: "988fda3e-4d80-4caf-9a15-488cb7f25768"). InnerVolumeSpecName "kube-api-access-2ndvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:42:33 crc kubenswrapper[4693]: I1212 16:42:33.675089 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/988fda3e-4d80-4caf-9a15-488cb7f25768-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "988fda3e-4d80-4caf-9a15-488cb7f25768" (UID: "988fda3e-4d80-4caf-9a15-488cb7f25768"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:42:33 crc kubenswrapper[4693]: I1212 16:42:33.759723 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/988fda3e-4d80-4caf-9a15-488cb7f25768-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:42:33 crc kubenswrapper[4693]: I1212 16:42:33.760053 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/988fda3e-4d80-4caf-9a15-488cb7f25768-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:42:33 crc kubenswrapper[4693]: I1212 16:42:33.760064 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ndvl\" (UniqueName: \"kubernetes.io/projected/988fda3e-4d80-4caf-9a15-488cb7f25768-kube-api-access-2ndvl\") on node \"crc\" DevicePath \"\"" Dec 12 16:42:33 crc kubenswrapper[4693]: I1212 16:42:33.971236 4693 generic.go:334] "Generic (PLEG): container finished" podID="988fda3e-4d80-4caf-9a15-488cb7f25768" containerID="16842bc9257bd0b4b401495f91480b84278c00d33e508facf9d0bb047e823b4c" exitCode=0 Dec 12 16:42:33 crc kubenswrapper[4693]: I1212 16:42:33.971308 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gtvgd" event={"ID":"988fda3e-4d80-4caf-9a15-488cb7f25768","Type":"ContainerDied","Data":"16842bc9257bd0b4b401495f91480b84278c00d33e508facf9d0bb047e823b4c"} Dec 12 16:42:33 crc kubenswrapper[4693]: I1212 16:42:33.971344 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gtvgd" event={"ID":"988fda3e-4d80-4caf-9a15-488cb7f25768","Type":"ContainerDied","Data":"ae793aa2ba708c83ced097c473080f3d3184fae59178914ed8e45a829935288d"} Dec 12 16:42:33 crc kubenswrapper[4693]: I1212 16:42:33.971380 4693 scope.go:117] "RemoveContainer" containerID="16842bc9257bd0b4b401495f91480b84278c00d33e508facf9d0bb047e823b4c" Dec 12 16:42:33 crc kubenswrapper[4693]: I1212 16:42:33.971451 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gtvgd" Dec 12 16:42:34 crc kubenswrapper[4693]: I1212 16:42:34.013171 4693 scope.go:117] "RemoveContainer" containerID="e210077ae406216c259ad09eb3edd10f0579f0b6e8e6829dfa423aee8d5487bc" Dec 12 16:42:34 crc kubenswrapper[4693]: I1212 16:42:34.062079 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gtvgd"] Dec 12 16:42:34 crc kubenswrapper[4693]: I1212 16:42:34.065475 4693 scope.go:117] "RemoveContainer" containerID="1b5c95cdf1e326cb7fce790f75ac2478bfdbc3608487ae77ec8143f2debf24f3" Dec 12 16:42:34 crc kubenswrapper[4693]: I1212 16:42:34.075934 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gtvgd"] Dec 12 16:42:34 crc kubenswrapper[4693]: I1212 16:42:34.127597 4693 scope.go:117] "RemoveContainer" containerID="16842bc9257bd0b4b401495f91480b84278c00d33e508facf9d0bb047e823b4c" Dec 12 16:42:34 crc kubenswrapper[4693]: E1212 16:42:34.128176 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16842bc9257bd0b4b401495f91480b84278c00d33e508facf9d0bb047e823b4c\": container with ID starting with 16842bc9257bd0b4b401495f91480b84278c00d33e508facf9d0bb047e823b4c not found: ID does not exist" containerID="16842bc9257bd0b4b401495f91480b84278c00d33e508facf9d0bb047e823b4c" Dec 12 16:42:34 crc kubenswrapper[4693]: I1212 16:42:34.128248 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16842bc9257bd0b4b401495f91480b84278c00d33e508facf9d0bb047e823b4c"} err="failed to get container status \"16842bc9257bd0b4b401495f91480b84278c00d33e508facf9d0bb047e823b4c\": rpc error: code = NotFound desc = could not find container \"16842bc9257bd0b4b401495f91480b84278c00d33e508facf9d0bb047e823b4c\": container with ID starting with 16842bc9257bd0b4b401495f91480b84278c00d33e508facf9d0bb047e823b4c not found: ID does not exist" Dec 12 16:42:34 crc kubenswrapper[4693]: I1212 16:42:34.128292 4693 scope.go:117] "RemoveContainer" containerID="e210077ae406216c259ad09eb3edd10f0579f0b6e8e6829dfa423aee8d5487bc" Dec 12 16:42:34 crc kubenswrapper[4693]: E1212 16:42:34.128549 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e210077ae406216c259ad09eb3edd10f0579f0b6e8e6829dfa423aee8d5487bc\": container with ID starting with e210077ae406216c259ad09eb3edd10f0579f0b6e8e6829dfa423aee8d5487bc not found: ID does not exist" containerID="e210077ae406216c259ad09eb3edd10f0579f0b6e8e6829dfa423aee8d5487bc" Dec 12 16:42:34 crc kubenswrapper[4693]: I1212 16:42:34.128579 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e210077ae406216c259ad09eb3edd10f0579f0b6e8e6829dfa423aee8d5487bc"} err="failed to get container status \"e210077ae406216c259ad09eb3edd10f0579f0b6e8e6829dfa423aee8d5487bc\": rpc error: code = NotFound desc = could not find container \"e210077ae406216c259ad09eb3edd10f0579f0b6e8e6829dfa423aee8d5487bc\": container with ID starting with e210077ae406216c259ad09eb3edd10f0579f0b6e8e6829dfa423aee8d5487bc not found: ID does not exist" Dec 12 16:42:34 crc kubenswrapper[4693]: I1212 16:42:34.128597 4693 scope.go:117] "RemoveContainer" containerID="1b5c95cdf1e326cb7fce790f75ac2478bfdbc3608487ae77ec8143f2debf24f3" Dec 12 16:42:34 crc kubenswrapper[4693]: E1212 16:42:34.129718 4693 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1b5c95cdf1e326cb7fce790f75ac2478bfdbc3608487ae77ec8143f2debf24f3\": container with ID starting with 1b5c95cdf1e326cb7fce790f75ac2478bfdbc3608487ae77ec8143f2debf24f3 not found: ID does not exist" containerID="1b5c95cdf1e326cb7fce790f75ac2478bfdbc3608487ae77ec8143f2debf24f3" Dec 12 16:42:34 crc kubenswrapper[4693]: I1212 16:42:34.129744 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5c95cdf1e326cb7fce790f75ac2478bfdbc3608487ae77ec8143f2debf24f3"} err="failed to get container status \"1b5c95cdf1e326cb7fce790f75ac2478bfdbc3608487ae77ec8143f2debf24f3\": rpc error: code = NotFound desc = could not find container \"1b5c95cdf1e326cb7fce790f75ac2478bfdbc3608487ae77ec8143f2debf24f3\": container with ID starting with 1b5c95cdf1e326cb7fce790f75ac2478bfdbc3608487ae77ec8143f2debf24f3 not found: ID does not exist" Dec 12 16:42:35 crc kubenswrapper[4693]: I1212 16:42:35.370828 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="988fda3e-4d80-4caf-9a15-488cb7f25768" path="/var/lib/kubelet/pods/988fda3e-4d80-4caf-9a15-488cb7f25768/volumes" Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.266531 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qm58d"] Dec 12 16:42:58 crc kubenswrapper[4693]: E1212 16:42:58.267894 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988fda3e-4d80-4caf-9a15-488cb7f25768" containerName="extract-content" Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.267914 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="988fda3e-4d80-4caf-9a15-488cb7f25768" containerName="extract-content" Dec 12 16:42:58 crc kubenswrapper[4693]: E1212 16:42:58.267946 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988fda3e-4d80-4caf-9a15-488cb7f25768" containerName="extract-utilities" Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.267957 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="988fda3e-4d80-4caf-9a15-488cb7f25768" containerName="extract-utilities" Dec 12 16:42:58 crc kubenswrapper[4693]: E1212 16:42:58.268003 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988fda3e-4d80-4caf-9a15-488cb7f25768" containerName="registry-server" Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.268012 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="988fda3e-4d80-4caf-9a15-488cb7f25768" containerName="registry-server" Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.268349 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="988fda3e-4d80-4caf-9a15-488cb7f25768" containerName="registry-server" Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.270727 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.280619 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qm58d"] Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.408367 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55e34e94-0758-4ae5-8f18-34b9baac75be-utilities\") pod \"community-operators-qm58d\" (UID: \"55e34e94-0758-4ae5-8f18-34b9baac75be\") " pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.408727 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tvdn\" (UniqueName: \"kubernetes.io/projected/55e34e94-0758-4ae5-8f18-34b9baac75be-kube-api-access-4tvdn\") pod \"community-operators-qm58d\" (UID: \"55e34e94-0758-4ae5-8f18-34b9baac75be\") " pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.409047 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55e34e94-0758-4ae5-8f18-34b9baac75be-catalog-content\") pod \"community-operators-qm58d\" (UID: \"55e34e94-0758-4ae5-8f18-34b9baac75be\") " pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.511552 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55e34e94-0758-4ae5-8f18-34b9baac75be-catalog-content\") pod \"community-operators-qm58d\" (UID: \"55e34e94-0758-4ae5-8f18-34b9baac75be\") " pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.512073 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55e34e94-0758-4ae5-8f18-34b9baac75be-utilities\") pod \"community-operators-qm58d\" (UID: \"55e34e94-0758-4ae5-8f18-34b9baac75be\") " pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.512136 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55e34e94-0758-4ae5-8f18-34b9baac75be-catalog-content\") pod \"community-operators-qm58d\" (UID: \"55e34e94-0758-4ae5-8f18-34b9baac75be\") " pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.512222 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tvdn\" (UniqueName: \"kubernetes.io/projected/55e34e94-0758-4ae5-8f18-34b9baac75be-kube-api-access-4tvdn\") pod \"community-operators-qm58d\" (UID: \"55e34e94-0758-4ae5-8f18-34b9baac75be\") " pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.512776 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55e34e94-0758-4ae5-8f18-34b9baac75be-utilities\") pod \"community-operators-qm58d\" (UID: \"55e34e94-0758-4ae5-8f18-34b9baac75be\") " pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.541352 4693 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4tvdn\" (UniqueName: \"kubernetes.io/projected/55e34e94-0758-4ae5-8f18-34b9baac75be-kube-api-access-4tvdn\") pod \"community-operators-qm58d\" (UID: \"55e34e94-0758-4ae5-8f18-34b9baac75be\") " pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:42:58 crc kubenswrapper[4693]: I1212 16:42:58.595656 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:42:59 crc kubenswrapper[4693]: I1212 16:42:59.208479 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qm58d"] Dec 12 16:42:59 crc kubenswrapper[4693]: I1212 16:42:59.261500 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qm58d" event={"ID":"55e34e94-0758-4ae5-8f18-34b9baac75be","Type":"ContainerStarted","Data":"4c9e28e7ce0f784b684ebaec492f92bf8c2b52017f0249c11de89d511e54ee5d"} Dec 12 16:43:00 crc kubenswrapper[4693]: I1212 16:43:00.273103 4693 generic.go:334] "Generic (PLEG): container finished" podID="55e34e94-0758-4ae5-8f18-34b9baac75be" containerID="1526309d4519b6cb23735af41066a6743bc8905ecaf907b3f72b28cac75350c8" exitCode=0 Dec 12 16:43:00 crc kubenswrapper[4693]: I1212 16:43:00.273650 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qm58d" event={"ID":"55e34e94-0758-4ae5-8f18-34b9baac75be","Type":"ContainerDied","Data":"1526309d4519b6cb23735af41066a6743bc8905ecaf907b3f72b28cac75350c8"} Dec 12 16:43:02 crc kubenswrapper[4693]: I1212 16:43:02.297650 4693 generic.go:334] "Generic (PLEG): container finished" podID="55e34e94-0758-4ae5-8f18-34b9baac75be" containerID="93234411a74edcbb66ca1f0e3d59c8ce03bf4e3693dcd5907b90685ba24f188c" exitCode=0 Dec 12 16:43:02 crc kubenswrapper[4693]: I1212 16:43:02.297766 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qm58d" event={"ID":"55e34e94-0758-4ae5-8f18-34b9baac75be","Type":"ContainerDied","Data":"93234411a74edcbb66ca1f0e3d59c8ce03bf4e3693dcd5907b90685ba24f188c"} Dec 12 16:43:03 crc kubenswrapper[4693]: I1212 16:43:03.314401 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qm58d" event={"ID":"55e34e94-0758-4ae5-8f18-34b9baac75be","Type":"ContainerStarted","Data":"2579567404b296b1123cabf1e17e1f3d63ddbff08e23352c5d6d370a790e39d7"} Dec 12 16:43:03 crc kubenswrapper[4693]: I1212 16:43:03.353766 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qm58d" podStartSLOduration=2.803384917 podStartE2EDuration="5.353747637s" podCreationTimestamp="2025-12-12 16:42:58 +0000 UTC" firstStartedPulling="2025-12-12 16:43:00.277717566 +0000 UTC m=+3407.446357167" lastFinishedPulling="2025-12-12 16:43:02.828080286 +0000 UTC m=+3409.996719887" observedRunningTime="2025-12-12 16:43:03.344901083 +0000 UTC m=+3410.513540694" watchObservedRunningTime="2025-12-12 16:43:03.353747637 +0000 UTC m=+3410.522387238" Dec 12 16:43:08 crc kubenswrapper[4693]: I1212 16:43:08.595818 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:43:08 crc kubenswrapper[4693]: I1212 16:43:08.596310 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:43:08 crc kubenswrapper[4693]: I1212 16:43:08.657765 4693 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:43:09 crc kubenswrapper[4693]: I1212 16:43:09.471858 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:43:09 crc kubenswrapper[4693]: I1212 16:43:09.567610 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qm58d"] Dec 12 16:43:11 crc kubenswrapper[4693]: I1212 16:43:11.408801 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qm58d" podUID="55e34e94-0758-4ae5-8f18-34b9baac75be" containerName="registry-server" containerID="cri-o://2579567404b296b1123cabf1e17e1f3d63ddbff08e23352c5d6d370a790e39d7" gracePeriod=2 Dec 12 16:43:11 crc kubenswrapper[4693]: I1212 16:43:11.997623 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.080833 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55e34e94-0758-4ae5-8f18-34b9baac75be-utilities\") pod \"55e34e94-0758-4ae5-8f18-34b9baac75be\" (UID: \"55e34e94-0758-4ae5-8f18-34b9baac75be\") " Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.081493 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tvdn\" (UniqueName: \"kubernetes.io/projected/55e34e94-0758-4ae5-8f18-34b9baac75be-kube-api-access-4tvdn\") pod \"55e34e94-0758-4ae5-8f18-34b9baac75be\" (UID: \"55e34e94-0758-4ae5-8f18-34b9baac75be\") " Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.081590 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55e34e94-0758-4ae5-8f18-34b9baac75be-catalog-content\") pod \"55e34e94-0758-4ae5-8f18-34b9baac75be\" (UID: \"55e34e94-0758-4ae5-8f18-34b9baac75be\") " Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.081871 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55e34e94-0758-4ae5-8f18-34b9baac75be-utilities" (OuterVolumeSpecName: "utilities") pod "55e34e94-0758-4ae5-8f18-34b9baac75be" (UID: "55e34e94-0758-4ae5-8f18-34b9baac75be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.084370 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55e34e94-0758-4ae5-8f18-34b9baac75be-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.089111 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55e34e94-0758-4ae5-8f18-34b9baac75be-kube-api-access-4tvdn" (OuterVolumeSpecName: "kube-api-access-4tvdn") pod "55e34e94-0758-4ae5-8f18-34b9baac75be" (UID: "55e34e94-0758-4ae5-8f18-34b9baac75be"). InnerVolumeSpecName "kube-api-access-4tvdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.141742 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55e34e94-0758-4ae5-8f18-34b9baac75be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55e34e94-0758-4ae5-8f18-34b9baac75be" (UID: "55e34e94-0758-4ae5-8f18-34b9baac75be"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.186958 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tvdn\" (UniqueName: \"kubernetes.io/projected/55e34e94-0758-4ae5-8f18-34b9baac75be-kube-api-access-4tvdn\") on node \"crc\" DevicePath \"\"" Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.186997 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55e34e94-0758-4ae5-8f18-34b9baac75be-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.421679 4693 generic.go:334] "Generic (PLEG): container finished" podID="55e34e94-0758-4ae5-8f18-34b9baac75be" containerID="2579567404b296b1123cabf1e17e1f3d63ddbff08e23352c5d6d370a790e39d7" exitCode=0 Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.421730 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qm58d" event={"ID":"55e34e94-0758-4ae5-8f18-34b9baac75be","Type":"ContainerDied","Data":"2579567404b296b1123cabf1e17e1f3d63ddbff08e23352c5d6d370a790e39d7"} Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.421765 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qm58d" event={"ID":"55e34e94-0758-4ae5-8f18-34b9baac75be","Type":"ContainerDied","Data":"4c9e28e7ce0f784b684ebaec492f92bf8c2b52017f0249c11de89d511e54ee5d"} Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.421787 4693 scope.go:117] "RemoveContainer" containerID="2579567404b296b1123cabf1e17e1f3d63ddbff08e23352c5d6d370a790e39d7" Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.421785 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qm58d" Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.451451 4693 scope.go:117] "RemoveContainer" containerID="93234411a74edcbb66ca1f0e3d59c8ce03bf4e3693dcd5907b90685ba24f188c" Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.471220 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qm58d"] Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.484925 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qm58d"] Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.510678 4693 scope.go:117] "RemoveContainer" containerID="1526309d4519b6cb23735af41066a6743bc8905ecaf907b3f72b28cac75350c8" Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.546303 4693 scope.go:117] "RemoveContainer" containerID="2579567404b296b1123cabf1e17e1f3d63ddbff08e23352c5d6d370a790e39d7" Dec 12 16:43:12 crc kubenswrapper[4693]: E1212 16:43:12.546704 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2579567404b296b1123cabf1e17e1f3d63ddbff08e23352c5d6d370a790e39d7\": container with ID starting with 2579567404b296b1123cabf1e17e1f3d63ddbff08e23352c5d6d370a790e39d7 not found: ID does not exist" containerID="2579567404b296b1123cabf1e17e1f3d63ddbff08e23352c5d6d370a790e39d7" Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.546740 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2579567404b296b1123cabf1e17e1f3d63ddbff08e23352c5d6d370a790e39d7"} err="failed to get container status \"2579567404b296b1123cabf1e17e1f3d63ddbff08e23352c5d6d370a790e39d7\": rpc error: code = NotFound desc = could not find container \"2579567404b296b1123cabf1e17e1f3d63ddbff08e23352c5d6d370a790e39d7\": container with ID starting with 2579567404b296b1123cabf1e17e1f3d63ddbff08e23352c5d6d370a790e39d7 not found: ID does not exist" Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.546766 4693 scope.go:117] "RemoveContainer" containerID="93234411a74edcbb66ca1f0e3d59c8ce03bf4e3693dcd5907b90685ba24f188c" Dec 12 16:43:12 crc kubenswrapper[4693]: E1212 16:43:12.546991 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93234411a74edcbb66ca1f0e3d59c8ce03bf4e3693dcd5907b90685ba24f188c\": container with ID starting with 93234411a74edcbb66ca1f0e3d59c8ce03bf4e3693dcd5907b90685ba24f188c not found: ID does not exist" containerID="93234411a74edcbb66ca1f0e3d59c8ce03bf4e3693dcd5907b90685ba24f188c" Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.547025 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93234411a74edcbb66ca1f0e3d59c8ce03bf4e3693dcd5907b90685ba24f188c"} err="failed to get container status \"93234411a74edcbb66ca1f0e3d59c8ce03bf4e3693dcd5907b90685ba24f188c\": rpc error: code = NotFound desc = could not find container \"93234411a74edcbb66ca1f0e3d59c8ce03bf4e3693dcd5907b90685ba24f188c\": container with ID starting with 93234411a74edcbb66ca1f0e3d59c8ce03bf4e3693dcd5907b90685ba24f188c not found: ID does not exist" Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.547041 4693 scope.go:117] "RemoveContainer" containerID="1526309d4519b6cb23735af41066a6743bc8905ecaf907b3f72b28cac75350c8" Dec 12 16:43:12 crc kubenswrapper[4693]: E1212 16:43:12.547574 4693 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1526309d4519b6cb23735af41066a6743bc8905ecaf907b3f72b28cac75350c8\": container with ID starting with 1526309d4519b6cb23735af41066a6743bc8905ecaf907b3f72b28cac75350c8 not found: ID does not exist" containerID="1526309d4519b6cb23735af41066a6743bc8905ecaf907b3f72b28cac75350c8" Dec 12 16:43:12 crc kubenswrapper[4693]: I1212 16:43:12.547602 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1526309d4519b6cb23735af41066a6743bc8905ecaf907b3f72b28cac75350c8"} err="failed to get container status \"1526309d4519b6cb23735af41066a6743bc8905ecaf907b3f72b28cac75350c8\": rpc error: code = NotFound desc = could not find container \"1526309d4519b6cb23735af41066a6743bc8905ecaf907b3f72b28cac75350c8\": container with ID starting with 1526309d4519b6cb23735af41066a6743bc8905ecaf907b3f72b28cac75350c8 not found: ID does not exist" Dec 12 16:43:13 crc kubenswrapper[4693]: I1212 16:43:13.376406 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55e34e94-0758-4ae5-8f18-34b9baac75be" path="/var/lib/kubelet/pods/55e34e94-0758-4ae5-8f18-34b9baac75be/volumes" Dec 12 16:43:19 crc kubenswrapper[4693]: I1212 16:43:19.502718 4693 generic.go:334] "Generic (PLEG): container finished" podID="febe4770-6951-4b71-89ca-397954381302" containerID="4fe019b3d4e43337b2d5b306c51101a497029fe907fa8de4b2ab293cb4766b97" exitCode=0 Dec 12 16:43:19 crc kubenswrapper[4693]: I1212 16:43:19.502848 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" event={"ID":"febe4770-6951-4b71-89ca-397954381302","Type":"ContainerDied","Data":"4fe019b3d4e43337b2d5b306c51101a497029fe907fa8de4b2ab293cb4766b97"} Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.014721 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.115398 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-inventory\") pod \"febe4770-6951-4b71-89ca-397954381302\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.115532 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ssh-key\") pod \"febe4770-6951-4b71-89ca-397954381302\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.115909 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-0\") pod \"febe4770-6951-4b71-89ca-397954381302\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.117017 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfgvh\" (UniqueName: \"kubernetes.io/projected/febe4770-6951-4b71-89ca-397954381302-kube-api-access-xfgvh\") pod \"febe4770-6951-4b71-89ca-397954381302\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.117200 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-2\") pod \"febe4770-6951-4b71-89ca-397954381302\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.117297 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-telemetry-combined-ca-bundle\") pod \"febe4770-6951-4b71-89ca-397954381302\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.117338 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-1\") pod \"febe4770-6951-4b71-89ca-397954381302\" (UID: \"febe4770-6951-4b71-89ca-397954381302\") " Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.123775 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "febe4770-6951-4b71-89ca-397954381302" (UID: "febe4770-6951-4b71-89ca-397954381302"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.143750 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/febe4770-6951-4b71-89ca-397954381302-kube-api-access-xfgvh" (OuterVolumeSpecName: "kube-api-access-xfgvh") pod "febe4770-6951-4b71-89ca-397954381302" (UID: "febe4770-6951-4b71-89ca-397954381302"). 
InnerVolumeSpecName "kube-api-access-xfgvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.157935 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "febe4770-6951-4b71-89ca-397954381302" (UID: "febe4770-6951-4b71-89ca-397954381302"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.159582 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "febe4770-6951-4b71-89ca-397954381302" (UID: "febe4770-6951-4b71-89ca-397954381302"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.164743 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "febe4770-6951-4b71-89ca-397954381302" (UID: "febe4770-6951-4b71-89ca-397954381302"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.169292 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "febe4770-6951-4b71-89ca-397954381302" (UID: "febe4770-6951-4b71-89ca-397954381302"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.169608 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-inventory" (OuterVolumeSpecName: "inventory") pod "febe4770-6951-4b71-89ca-397954381302" (UID: "febe4770-6951-4b71-89ca-397954381302"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.220814 4693 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.220855 4693 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.220870 4693 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.220884 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.220901 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.220914 4693 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/febe4770-6951-4b71-89ca-397954381302-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.220927 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfgvh\" (UniqueName: \"kubernetes.io/projected/febe4770-6951-4b71-89ca-397954381302-kube-api-access-xfgvh\") on node \"crc\" DevicePath \"\"" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.530519 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" event={"ID":"febe4770-6951-4b71-89ca-397954381302","Type":"ContainerDied","Data":"59626f05ed0105a705773cd58bfdebc21a29fef580c5fae3c7f9eb9200291c85"} Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.530968 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59626f05ed0105a705773cd58bfdebc21a29fef580c5fae3c7f9eb9200291c85" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.530607 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-77w6k" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.649651 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8"] Dec 12 16:43:21 crc kubenswrapper[4693]: E1212 16:43:21.651084 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e34e94-0758-4ae5-8f18-34b9baac75be" containerName="extract-utilities" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.651110 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="55e34e94-0758-4ae5-8f18-34b9baac75be" containerName="extract-utilities" Dec 12 16:43:21 crc kubenswrapper[4693]: E1212 16:43:21.651149 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e34e94-0758-4ae5-8f18-34b9baac75be" containerName="extract-content" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.651158 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="55e34e94-0758-4ae5-8f18-34b9baac75be" containerName="extract-content" Dec 12 16:43:21 crc kubenswrapper[4693]: E1212 16:43:21.651180 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febe4770-6951-4b71-89ca-397954381302" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.651192 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="febe4770-6951-4b71-89ca-397954381302" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 12 16:43:21 crc kubenswrapper[4693]: E1212 16:43:21.651240 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e34e94-0758-4ae5-8f18-34b9baac75be" containerName="registry-server" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.651247 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="55e34e94-0758-4ae5-8f18-34b9baac75be" containerName="registry-server" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.651822 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="55e34e94-0758-4ae5-8f18-34b9baac75be" containerName="registry-server" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.651863 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="febe4770-6951-4b71-89ca-397954381302" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.653161 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.666301 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.666783 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.667821 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.668464 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.668512 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.676296 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8"] Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.735976 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.736130 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.736192 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88nbg\" (UniqueName: \"kubernetes.io/projected/022d69a8-3872-41ec-83a1-8f9756d6226b-kube-api-access-88nbg\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.736261 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.736583 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-2\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.736696 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.736751 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.839233 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.839350 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.839402 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.839453 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.839496 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 
16:43:21.839556 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88nbg\" (UniqueName: \"kubernetes.io/projected/022d69a8-3872-41ec-83a1-8f9756d6226b-kube-api-access-88nbg\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.839607 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.846842 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.847495 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.848052 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.848396 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.848883 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.852086 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-1\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.859568 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88nbg\" (UniqueName: \"kubernetes.io/projected/022d69a8-3872-41ec-83a1-8f9756d6226b-kube-api-access-88nbg\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:21 crc kubenswrapper[4693]: I1212 16:43:21.999637 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:43:22 crc kubenswrapper[4693]: I1212 16:43:22.538949 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8"] Dec 12 16:43:23 crc kubenswrapper[4693]: I1212 16:43:23.555563 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" event={"ID":"022d69a8-3872-41ec-83a1-8f9756d6226b","Type":"ContainerStarted","Data":"e6399464963f07be6f6d2a10b4146d6eaacd71209ec538f71dbc6d81cd5c818c"} Dec 12 16:43:24 crc kubenswrapper[4693]: I1212 16:43:24.569136 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" event={"ID":"022d69a8-3872-41ec-83a1-8f9756d6226b","Type":"ContainerStarted","Data":"440427e6c77fc8a8a9154794964b1d5b12c4e5d084f49b867fa9e4c755965261"} Dec 12 16:43:24 crc kubenswrapper[4693]: I1212 16:43:24.605973 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" podStartSLOduration=2.879357651 podStartE2EDuration="3.605947319s" podCreationTimestamp="2025-12-12 16:43:21 +0000 UTC" firstStartedPulling="2025-12-12 16:43:22.535254122 +0000 UTC m=+3429.703893723" lastFinishedPulling="2025-12-12 16:43:23.26184379 +0000 UTC m=+3430.430483391" observedRunningTime="2025-12-12 16:43:24.591348603 +0000 UTC m=+3431.759988214" watchObservedRunningTime="2025-12-12 16:43:24.605947319 +0000 UTC m=+3431.774586940" Dec 12 16:43:42 crc kubenswrapper[4693]: I1212 16:43:42.530906 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:43:42 crc kubenswrapper[4693]: I1212 16:43:42.531708 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:44:12 crc kubenswrapper[4693]: I1212 16:44:12.530454 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 
16:44:12 crc kubenswrapper[4693]: I1212 16:44:12.530918 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:44:42 crc kubenswrapper[4693]: I1212 16:44:42.530021 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:44:42 crc kubenswrapper[4693]: I1212 16:44:42.530505 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:44:42 crc kubenswrapper[4693]: I1212 16:44:42.530566 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 16:44:42 crc kubenswrapper[4693]: I1212 16:44:42.531513 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e62673de23f46e5e24bbb6ce63f6df95e4e0371a449f0b6d43f70818c45017fc"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:44:42 crc kubenswrapper[4693]: I1212 16:44:42.531591 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://e62673de23f46e5e24bbb6ce63f6df95e4e0371a449f0b6d43f70818c45017fc" gracePeriod=600 Dec 12 16:44:42 crc kubenswrapper[4693]: I1212 16:44:42.986215 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s495t"] Dec 12 16:44:42 crc kubenswrapper[4693]: I1212 16:44:42.989956 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:44:43 crc kubenswrapper[4693]: I1212 16:44:43.007092 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s495t"] Dec 12 16:44:43 crc kubenswrapper[4693]: I1212 16:44:43.177045 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-utilities\") pod \"certified-operators-s495t\" (UID: \"4b519b0f-fbd6-4eca-aa48-0248609cf4d0\") " pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:44:43 crc kubenswrapper[4693]: I1212 16:44:43.177503 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-catalog-content\") pod \"certified-operators-s495t\" (UID: \"4b519b0f-fbd6-4eca-aa48-0248609cf4d0\") " pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:44:43 crc kubenswrapper[4693]: I1212 16:44:43.177602 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gff98\" (UniqueName: \"kubernetes.io/projected/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-kube-api-access-gff98\") pod \"certified-operators-s495t\" (UID: \"4b519b0f-fbd6-4eca-aa48-0248609cf4d0\") " pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:44:43 crc kubenswrapper[4693]: I1212 16:44:43.279708 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-utilities\") pod \"certified-operators-s495t\" (UID: \"4b519b0f-fbd6-4eca-aa48-0248609cf4d0\") " pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:44:43 crc kubenswrapper[4693]: I1212 16:44:43.279816 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-catalog-content\") pod \"certified-operators-s495t\" (UID: \"4b519b0f-fbd6-4eca-aa48-0248609cf4d0\") " pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:44:43 crc kubenswrapper[4693]: I1212 16:44:43.279883 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gff98\" (UniqueName: \"kubernetes.io/projected/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-kube-api-access-gff98\") pod \"certified-operators-s495t\" (UID: \"4b519b0f-fbd6-4eca-aa48-0248609cf4d0\") " pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:44:43 crc kubenswrapper[4693]: I1212 16:44:43.280298 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-utilities\") pod \"certified-operators-s495t\" (UID: \"4b519b0f-fbd6-4eca-aa48-0248609cf4d0\") " pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:44:43 crc kubenswrapper[4693]: I1212 16:44:43.280329 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-catalog-content\") pod \"certified-operators-s495t\" (UID: \"4b519b0f-fbd6-4eca-aa48-0248609cf4d0\") " pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:44:43 crc kubenswrapper[4693]: I1212 16:44:43.301522 4693 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gff98\" (UniqueName: \"kubernetes.io/projected/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-kube-api-access-gff98\") pod \"certified-operators-s495t\" (UID: \"4b519b0f-fbd6-4eca-aa48-0248609cf4d0\") " pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:44:43 crc kubenswrapper[4693]: I1212 16:44:43.321490 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:44:43 crc kubenswrapper[4693]: I1212 16:44:43.503702 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="e62673de23f46e5e24bbb6ce63f6df95e4e0371a449f0b6d43f70818c45017fc" exitCode=0 Dec 12 16:44:43 crc kubenswrapper[4693]: I1212 16:44:43.504010 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"e62673de23f46e5e24bbb6ce63f6df95e4e0371a449f0b6d43f70818c45017fc"} Dec 12 16:44:43 crc kubenswrapper[4693]: I1212 16:44:43.504039 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273"} Dec 12 16:44:43 crc kubenswrapper[4693]: I1212 16:44:43.504054 4693 scope.go:117] "RemoveContainer" containerID="35825a8dd668cc271aea7a156119db8bad82b310e20b77efa25ee0f57e980237" Dec 12 16:44:43 crc kubenswrapper[4693]: I1212 16:44:43.965955 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s495t"] Dec 12 16:44:44 crc kubenswrapper[4693]: I1212 16:44:44.523148 4693 generic.go:334] "Generic (PLEG): container finished" podID="4b519b0f-fbd6-4eca-aa48-0248609cf4d0" containerID="304dbdc19545b57f25f4feeac0d255a1e8a6d322d6300ec99d4bfca26810ef6c" exitCode=0 Dec 12 16:44:44 crc kubenswrapper[4693]: I1212 16:44:44.523330 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s495t" event={"ID":"4b519b0f-fbd6-4eca-aa48-0248609cf4d0","Type":"ContainerDied","Data":"304dbdc19545b57f25f4feeac0d255a1e8a6d322d6300ec99d4bfca26810ef6c"} Dec 12 16:44:44 crc kubenswrapper[4693]: I1212 16:44:44.523727 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s495t" event={"ID":"4b519b0f-fbd6-4eca-aa48-0248609cf4d0","Type":"ContainerStarted","Data":"71b2ba1d0a123738bf60e4a90485699ca2b72b8fd0a572deb5a561e3e8e7d49d"} Dec 12 16:44:45 crc kubenswrapper[4693]: I1212 16:44:45.566059 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s495t" event={"ID":"4b519b0f-fbd6-4eca-aa48-0248609cf4d0","Type":"ContainerStarted","Data":"5c7ab1d1d9453e72eda167f268f5b35fddc94f8481c81ae47ea3909b7f65f492"} Dec 12 16:44:47 crc kubenswrapper[4693]: I1212 16:44:47.604052 4693 generic.go:334] "Generic (PLEG): container finished" podID="4b519b0f-fbd6-4eca-aa48-0248609cf4d0" containerID="5c7ab1d1d9453e72eda167f268f5b35fddc94f8481c81ae47ea3909b7f65f492" exitCode=0 Dec 12 16:44:47 crc kubenswrapper[4693]: I1212 16:44:47.604452 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s495t" 
event={"ID":"4b519b0f-fbd6-4eca-aa48-0248609cf4d0","Type":"ContainerDied","Data":"5c7ab1d1d9453e72eda167f268f5b35fddc94f8481c81ae47ea3909b7f65f492"} Dec 12 16:44:48 crc kubenswrapper[4693]: I1212 16:44:48.621236 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s495t" event={"ID":"4b519b0f-fbd6-4eca-aa48-0248609cf4d0","Type":"ContainerStarted","Data":"d7d6085731a2f62ede47d292a636bc6141645819c928ef233c2d190ea46cbd84"} Dec 12 16:44:48 crc kubenswrapper[4693]: I1212 16:44:48.648338 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s495t" podStartSLOduration=3.041367004 podStartE2EDuration="6.648316504s" podCreationTimestamp="2025-12-12 16:44:42 +0000 UTC" firstStartedPulling="2025-12-12 16:44:44.525978025 +0000 UTC m=+3511.694617626" lastFinishedPulling="2025-12-12 16:44:48.132927525 +0000 UTC m=+3515.301567126" observedRunningTime="2025-12-12 16:44:48.640923608 +0000 UTC m=+3515.809563219" watchObservedRunningTime="2025-12-12 16:44:48.648316504 +0000 UTC m=+3515.816956115" Dec 12 16:44:53 crc kubenswrapper[4693]: I1212 16:44:53.322001 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:44:53 crc kubenswrapper[4693]: I1212 16:44:53.322517 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:44:53 crc kubenswrapper[4693]: I1212 16:44:53.386706 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:44:53 crc kubenswrapper[4693]: I1212 16:44:53.741725 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:44:56 crc kubenswrapper[4693]: I1212 16:44:56.959019 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s495t"] Dec 12 16:44:56 crc kubenswrapper[4693]: I1212 16:44:56.959861 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s495t" podUID="4b519b0f-fbd6-4eca-aa48-0248609cf4d0" containerName="registry-server" containerID="cri-o://d7d6085731a2f62ede47d292a636bc6141645819c928ef233c2d190ea46cbd84" gracePeriod=2 Dec 12 16:44:57 crc kubenswrapper[4693]: I1212 16:44:57.875687 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" podUID="26cbeab5-89ba-425a-87c0-8797ceb6957a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 16:44:59 crc kubenswrapper[4693]: I1212 16:44:59.749821 4693 generic.go:334] "Generic (PLEG): container finished" podID="4b519b0f-fbd6-4eca-aa48-0248609cf4d0" containerID="d7d6085731a2f62ede47d292a636bc6141645819c928ef233c2d190ea46cbd84" exitCode=0 Dec 12 16:44:59 crc kubenswrapper[4693]: I1212 16:44:59.749921 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s495t" event={"ID":"4b519b0f-fbd6-4eca-aa48-0248609cf4d0","Type":"ContainerDied","Data":"d7d6085731a2f62ede47d292a636bc6141645819c928ef233c2d190ea46cbd84"} Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.152210 4693 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg"] Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.154640 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.158858 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.158976 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.166478 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg"] Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.186074 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vnmj\" (UniqueName: \"kubernetes.io/projected/8c28e8e6-e077-483e-850b-1b87a30f6472-kube-api-access-6vnmj\") pod \"collect-profiles-29425965-2nrfg\" (UID: \"8c28e8e6-e077-483e-850b-1b87a30f6472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.186132 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c28e8e6-e077-483e-850b-1b87a30f6472-secret-volume\") pod \"collect-profiles-29425965-2nrfg\" (UID: \"8c28e8e6-e077-483e-850b-1b87a30f6472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.186387 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c28e8e6-e077-483e-850b-1b87a30f6472-config-volume\") pod \"collect-profiles-29425965-2nrfg\" (UID: \"8c28e8e6-e077-483e-850b-1b87a30f6472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.194724 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.288562 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gff98\" (UniqueName: \"kubernetes.io/projected/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-kube-api-access-gff98\") pod \"4b519b0f-fbd6-4eca-aa48-0248609cf4d0\" (UID: \"4b519b0f-fbd6-4eca-aa48-0248609cf4d0\") " Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.288653 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-utilities\") pod \"4b519b0f-fbd6-4eca-aa48-0248609cf4d0\" (UID: \"4b519b0f-fbd6-4eca-aa48-0248609cf4d0\") " Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.288887 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-catalog-content\") pod \"4b519b0f-fbd6-4eca-aa48-0248609cf4d0\" (UID: \"4b519b0f-fbd6-4eca-aa48-0248609cf4d0\") " Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.289311 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c28e8e6-e077-483e-850b-1b87a30f6472-secret-volume\") pod \"collect-profiles-29425965-2nrfg\" (UID: \"8c28e8e6-e077-483e-850b-1b87a30f6472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.289629 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c28e8e6-e077-483e-850b-1b87a30f6472-config-volume\") pod \"collect-profiles-29425965-2nrfg\" (UID: \"8c28e8e6-e077-483e-850b-1b87a30f6472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.289784 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vnmj\" (UniqueName: \"kubernetes.io/projected/8c28e8e6-e077-483e-850b-1b87a30f6472-kube-api-access-6vnmj\") pod \"collect-profiles-29425965-2nrfg\" (UID: \"8c28e8e6-e077-483e-850b-1b87a30f6472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.290330 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c28e8e6-e077-483e-850b-1b87a30f6472-config-volume\") pod \"collect-profiles-29425965-2nrfg\" (UID: \"8c28e8e6-e077-483e-850b-1b87a30f6472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.296164 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c28e8e6-e077-483e-850b-1b87a30f6472-secret-volume\") pod \"collect-profiles-29425965-2nrfg\" (UID: \"8c28e8e6-e077-483e-850b-1b87a30f6472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.296329 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-utilities" (OuterVolumeSpecName: "utilities") pod "4b519b0f-fbd6-4eca-aa48-0248609cf4d0" (UID: 
"4b519b0f-fbd6-4eca-aa48-0248609cf4d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.296414 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-kube-api-access-gff98" (OuterVolumeSpecName: "kube-api-access-gff98") pod "4b519b0f-fbd6-4eca-aa48-0248609cf4d0" (UID: "4b519b0f-fbd6-4eca-aa48-0248609cf4d0"). InnerVolumeSpecName "kube-api-access-gff98". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.314891 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vnmj\" (UniqueName: \"kubernetes.io/projected/8c28e8e6-e077-483e-850b-1b87a30f6472-kube-api-access-6vnmj\") pod \"collect-profiles-29425965-2nrfg\" (UID: \"8c28e8e6-e077-483e-850b-1b87a30f6472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.355818 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b519b0f-fbd6-4eca-aa48-0248609cf4d0" (UID: "4b519b0f-fbd6-4eca-aa48-0248609cf4d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.391670 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gff98\" (UniqueName: \"kubernetes.io/projected/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-kube-api-access-gff98\") on node \"crc\" DevicePath \"\"" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.391702 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.391711 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b519b0f-fbd6-4eca-aa48-0248609cf4d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.511337 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.764457 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s495t" event={"ID":"4b519b0f-fbd6-4eca-aa48-0248609cf4d0","Type":"ContainerDied","Data":"71b2ba1d0a123738bf60e4a90485699ca2b72b8fd0a572deb5a561e3e8e7d49d"} Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.764506 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s495t" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.764737 4693 scope.go:117] "RemoveContainer" containerID="d7d6085731a2f62ede47d292a636bc6141645819c928ef233c2d190ea46cbd84" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.807861 4693 scope.go:117] "RemoveContainer" containerID="5c7ab1d1d9453e72eda167f268f5b35fddc94f8481c81ae47ea3909b7f65f492" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.816058 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s495t"] Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.831593 4693 scope.go:117] "RemoveContainer" containerID="304dbdc19545b57f25f4feeac0d255a1e8a6d322d6300ec99d4bfca26810ef6c" Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.831758 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s495t"] Dec 12 16:45:00 crc kubenswrapper[4693]: I1212 16:45:00.983534 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg"] Dec 12 16:45:00 crc kubenswrapper[4693]: W1212 16:45:00.986586 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c28e8e6_e077_483e_850b_1b87a30f6472.slice/crio-784dffc5edca3df2e4228812b2addf1ebc34b135acd64db61713adc1f4cc5fe6 WatchSource:0}: Error finding container 784dffc5edca3df2e4228812b2addf1ebc34b135acd64db61713adc1f4cc5fe6: Status 404 returned error can't find the container with id 784dffc5edca3df2e4228812b2addf1ebc34b135acd64db61713adc1f4cc5fe6 Dec 12 16:45:01 crc kubenswrapper[4693]: I1212 16:45:01.372801 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b519b0f-fbd6-4eca-aa48-0248609cf4d0" path="/var/lib/kubelet/pods/4b519b0f-fbd6-4eca-aa48-0248609cf4d0/volumes" Dec 12 16:45:01 crc kubenswrapper[4693]: I1212 16:45:01.780259 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg" event={"ID":"8c28e8e6-e077-483e-850b-1b87a30f6472","Type":"ContainerStarted","Data":"784dffc5edca3df2e4228812b2addf1ebc34b135acd64db61713adc1f4cc5fe6"} Dec 12 16:45:02 crc kubenswrapper[4693]: I1212 16:45:02.797139 4693 generic.go:334] "Generic (PLEG): container finished" podID="8c28e8e6-e077-483e-850b-1b87a30f6472" containerID="1551ed238afcc6ec49f26eb30a456e58324eecd973058d1b30766595e7c5e906" exitCode=0 Dec 12 16:45:02 crc kubenswrapper[4693]: I1212 16:45:02.797236 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg" event={"ID":"8c28e8e6-e077-483e-850b-1b87a30f6472","Type":"ContainerDied","Data":"1551ed238afcc6ec49f26eb30a456e58324eecd973058d1b30766595e7c5e906"} Dec 12 16:45:04 crc kubenswrapper[4693]: I1212 16:45:04.329801 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg" Dec 12 16:45:04 crc kubenswrapper[4693]: I1212 16:45:04.426881 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vnmj\" (UniqueName: \"kubernetes.io/projected/8c28e8e6-e077-483e-850b-1b87a30f6472-kube-api-access-6vnmj\") pod \"8c28e8e6-e077-483e-850b-1b87a30f6472\" (UID: \"8c28e8e6-e077-483e-850b-1b87a30f6472\") " Dec 12 16:45:04 crc kubenswrapper[4693]: I1212 16:45:04.427386 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c28e8e6-e077-483e-850b-1b87a30f6472-secret-volume\") pod \"8c28e8e6-e077-483e-850b-1b87a30f6472\" (UID: \"8c28e8e6-e077-483e-850b-1b87a30f6472\") " Dec 12 16:45:04 crc kubenswrapper[4693]: I1212 16:45:04.427469 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c28e8e6-e077-483e-850b-1b87a30f6472-config-volume\") pod \"8c28e8e6-e077-483e-850b-1b87a30f6472\" (UID: \"8c28e8e6-e077-483e-850b-1b87a30f6472\") " Dec 12 16:45:04 crc kubenswrapper[4693]: I1212 16:45:04.428202 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c28e8e6-e077-483e-850b-1b87a30f6472-config-volume" (OuterVolumeSpecName: "config-volume") pod "8c28e8e6-e077-483e-850b-1b87a30f6472" (UID: "8c28e8e6-e077-483e-850b-1b87a30f6472"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 16:45:04 crc kubenswrapper[4693]: I1212 16:45:04.432836 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c28e8e6-e077-483e-850b-1b87a30f6472-kube-api-access-6vnmj" (OuterVolumeSpecName: "kube-api-access-6vnmj") pod "8c28e8e6-e077-483e-850b-1b87a30f6472" (UID: "8c28e8e6-e077-483e-850b-1b87a30f6472"). InnerVolumeSpecName "kube-api-access-6vnmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:45:04 crc kubenswrapper[4693]: I1212 16:45:04.438452 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c28e8e6-e077-483e-850b-1b87a30f6472-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8c28e8e6-e077-483e-850b-1b87a30f6472" (UID: "8c28e8e6-e077-483e-850b-1b87a30f6472"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:45:04 crc kubenswrapper[4693]: I1212 16:45:04.530397 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vnmj\" (UniqueName: \"kubernetes.io/projected/8c28e8e6-e077-483e-850b-1b87a30f6472-kube-api-access-6vnmj\") on node \"crc\" DevicePath \"\"" Dec 12 16:45:04 crc kubenswrapper[4693]: I1212 16:45:04.530439 4693 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c28e8e6-e077-483e-850b-1b87a30f6472-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:45:04 crc kubenswrapper[4693]: I1212 16:45:04.530449 4693 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c28e8e6-e077-483e-850b-1b87a30f6472-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:45:04 crc kubenswrapper[4693]: I1212 16:45:04.818265 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg" event={"ID":"8c28e8e6-e077-483e-850b-1b87a30f6472","Type":"ContainerDied","Data":"784dffc5edca3df2e4228812b2addf1ebc34b135acd64db61713adc1f4cc5fe6"} Dec 12 16:45:04 crc kubenswrapper[4693]: I1212 16:45:04.818327 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="784dffc5edca3df2e4228812b2addf1ebc34b135acd64db61713adc1f4cc5fe6" Dec 12 16:45:04 crc kubenswrapper[4693]: I1212 16:45:04.818338 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425965-2nrfg" Dec 12 16:45:05 crc kubenswrapper[4693]: I1212 16:45:05.435725 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t"] Dec 12 16:45:05 crc kubenswrapper[4693]: I1212 16:45:05.446796 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425920-vdz7t"] Dec 12 16:45:07 crc kubenswrapper[4693]: I1212 16:45:07.372842 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50cd7719-da3b-43a9-980b-60c1709e862e" path="/var/lib/kubelet/pods/50cd7719-da3b-43a9-980b-60c1709e862e/volumes" Dec 12 16:45:40 crc kubenswrapper[4693]: I1212 16:45:40.306404 4693 generic.go:334] "Generic (PLEG): container finished" podID="022d69a8-3872-41ec-83a1-8f9756d6226b" containerID="440427e6c77fc8a8a9154794964b1d5b12c4e5d084f49b867fa9e4c755965261" exitCode=0 Dec 12 16:45:40 crc kubenswrapper[4693]: I1212 16:45:40.307318 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" event={"ID":"022d69a8-3872-41ec-83a1-8f9756d6226b","Type":"ContainerDied","Data":"440427e6c77fc8a8a9154794964b1d5b12c4e5d084f49b867fa9e4c755965261"} Dec 12 16:45:41 crc kubenswrapper[4693]: I1212 16:45:41.871675 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:45:41 crc kubenswrapper[4693]: I1212 16:45:41.948494 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-2\") pod \"022d69a8-3872-41ec-83a1-8f9756d6226b\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " Dec 12 16:45:41 crc kubenswrapper[4693]: I1212 16:45:41.948681 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-1\") pod \"022d69a8-3872-41ec-83a1-8f9756d6226b\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " Dec 12 16:45:41 crc kubenswrapper[4693]: I1212 16:45:41.948877 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-0\") pod \"022d69a8-3872-41ec-83a1-8f9756d6226b\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " Dec 12 16:45:41 crc kubenswrapper[4693]: I1212 16:45:41.948955 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-inventory\") pod \"022d69a8-3872-41ec-83a1-8f9756d6226b\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " Dec 12 16:45:41 crc kubenswrapper[4693]: I1212 16:45:41.948999 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88nbg\" (UniqueName: \"kubernetes.io/projected/022d69a8-3872-41ec-83a1-8f9756d6226b-kube-api-access-88nbg\") pod \"022d69a8-3872-41ec-83a1-8f9756d6226b\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " Dec 12 16:45:41 crc kubenswrapper[4693]: I1212 16:45:41.949072 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-telemetry-power-monitoring-combined-ca-bundle\") pod \"022d69a8-3872-41ec-83a1-8f9756d6226b\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " Dec 12 16:45:41 crc kubenswrapper[4693]: I1212 16:45:41.949108 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ssh-key\") pod \"022d69a8-3872-41ec-83a1-8f9756d6226b\" (UID: \"022d69a8-3872-41ec-83a1-8f9756d6226b\") " Dec 12 16:45:41 crc kubenswrapper[4693]: I1212 16:45:41.955858 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "022d69a8-3872-41ec-83a1-8f9756d6226b" (UID: "022d69a8-3872-41ec-83a1-8f9756d6226b"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:45:41 crc kubenswrapper[4693]: I1212 16:45:41.956173 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/022d69a8-3872-41ec-83a1-8f9756d6226b-kube-api-access-88nbg" (OuterVolumeSpecName: "kube-api-access-88nbg") pod "022d69a8-3872-41ec-83a1-8f9756d6226b" (UID: "022d69a8-3872-41ec-83a1-8f9756d6226b"). InnerVolumeSpecName "kube-api-access-88nbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:45:41 crc kubenswrapper[4693]: I1212 16:45:41.990806 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "022d69a8-3872-41ec-83a1-8f9756d6226b" (UID: "022d69a8-3872-41ec-83a1-8f9756d6226b"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:45:41 crc kubenswrapper[4693]: I1212 16:45:41.994749 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "022d69a8-3872-41ec-83a1-8f9756d6226b" (UID: "022d69a8-3872-41ec-83a1-8f9756d6226b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:45:41 crc kubenswrapper[4693]: I1212 16:45:41.995190 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "022d69a8-3872-41ec-83a1-8f9756d6226b" (UID: "022d69a8-3872-41ec-83a1-8f9756d6226b"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.002580 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-inventory" (OuterVolumeSpecName: "inventory") pod "022d69a8-3872-41ec-83a1-8f9756d6226b" (UID: "022d69a8-3872-41ec-83a1-8f9756d6226b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.020996 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "022d69a8-3872-41ec-83a1-8f9756d6226b" (UID: "022d69a8-3872-41ec-83a1-8f9756d6226b"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.052513 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.052551 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88nbg\" (UniqueName: \"kubernetes.io/projected/022d69a8-3872-41ec-83a1-8f9756d6226b-kube-api-access-88nbg\") on node \"crc\" DevicePath \"\"" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.052566 4693 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.052580 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.052589 4693 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.052599 4693 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.052609 4693 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/022d69a8-3872-41ec-83a1-8f9756d6226b-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.336039 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" event={"ID":"022d69a8-3872-41ec-83a1-8f9756d6226b","Type":"ContainerDied","Data":"e6399464963f07be6f6d2a10b4146d6eaacd71209ec538f71dbc6d81cd5c818c"} Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.336426 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6399464963f07be6f6d2a10b4146d6eaacd71209ec538f71dbc6d81cd5c818c" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.336140 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-826z8" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.467327 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk"] Dec 12 16:45:42 crc kubenswrapper[4693]: E1212 16:45:42.467976 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b519b0f-fbd6-4eca-aa48-0248609cf4d0" containerName="extract-utilities" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.468002 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b519b0f-fbd6-4eca-aa48-0248609cf4d0" containerName="extract-utilities" Dec 12 16:45:42 crc kubenswrapper[4693]: E1212 16:45:42.468036 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c28e8e6-e077-483e-850b-1b87a30f6472" containerName="collect-profiles" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.468047 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c28e8e6-e077-483e-850b-1b87a30f6472" containerName="collect-profiles" Dec 12 16:45:42 crc kubenswrapper[4693]: E1212 16:45:42.468060 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b519b0f-fbd6-4eca-aa48-0248609cf4d0" containerName="extract-content" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.468069 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b519b0f-fbd6-4eca-aa48-0248609cf4d0" containerName="extract-content" Dec 12 16:45:42 crc kubenswrapper[4693]: E1212 16:45:42.468084 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b519b0f-fbd6-4eca-aa48-0248609cf4d0" containerName="registry-server" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.468090 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b519b0f-fbd6-4eca-aa48-0248609cf4d0" containerName="registry-server" Dec 12 16:45:42 crc kubenswrapper[4693]: E1212 16:45:42.468114 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022d69a8-3872-41ec-83a1-8f9756d6226b" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.468122 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="022d69a8-3872-41ec-83a1-8f9756d6226b" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.468417 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b519b0f-fbd6-4eca-aa48-0248609cf4d0" containerName="registry-server" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.468458 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c28e8e6-e077-483e-850b-1b87a30f6472" containerName="collect-profiles" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.468477 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="022d69a8-3872-41ec-83a1-8f9756d6226b" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.469510 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.472136 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.472490 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.472615 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlgf7" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.473123 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.473691 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.506369 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk"] Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.563402 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-dljqk\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.563486 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-dljqk\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.563572 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6br9\" (UniqueName: \"kubernetes.io/projected/5d9992e5-db20-4681-932e-980855ff3006-kube-api-access-n6br9\") pod \"logging-edpm-deployment-openstack-edpm-ipam-dljqk\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.563605 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-dljqk\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.563758 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-dljqk\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.665590 4693 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-dljqk\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.665660 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-dljqk\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.665701 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6br9\" (UniqueName: \"kubernetes.io/projected/5d9992e5-db20-4681-932e-980855ff3006-kube-api-access-n6br9\") pod \"logging-edpm-deployment-openstack-edpm-ipam-dljqk\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.665719 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-dljqk\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.665872 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-dljqk\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.669908 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-dljqk\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.669908 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-dljqk\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.670843 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-dljqk\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.674596 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-dljqk\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.683564 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6br9\" (UniqueName: \"kubernetes.io/projected/5d9992e5-db20-4681-932e-980855ff3006-kube-api-access-n6br9\") pod \"logging-edpm-deployment-openstack-edpm-ipam-dljqk\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:42 crc kubenswrapper[4693]: I1212 16:45:42.790678 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:45:43 crc kubenswrapper[4693]: I1212 16:45:43.385002 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk"] Dec 12 16:45:44 crc kubenswrapper[4693]: I1212 16:45:44.369022 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" event={"ID":"5d9992e5-db20-4681-932e-980855ff3006","Type":"ContainerStarted","Data":"a0a441b78494f295ad40e8027ba5c86fe8e32eb93cd665c873fed9e984d39979"} Dec 12 16:45:46 crc kubenswrapper[4693]: I1212 16:45:46.407100 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" event={"ID":"5d9992e5-db20-4681-932e-980855ff3006","Type":"ContainerStarted","Data":"6f6e792ababfbbb3a92db44545bc599201176604a5317a0ccf6732e25d06fcb5"} Dec 12 16:45:46 crc kubenswrapper[4693]: I1212 16:45:46.434774 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" podStartSLOduration=2.375256823 podStartE2EDuration="4.434755603s" podCreationTimestamp="2025-12-12 16:45:42 +0000 UTC" firstStartedPulling="2025-12-12 16:45:43.398438224 +0000 UTC m=+3570.567077815" lastFinishedPulling="2025-12-12 16:45:45.457936984 +0000 UTC m=+3572.626576595" observedRunningTime="2025-12-12 16:45:46.425838227 +0000 UTC m=+3573.594477828" watchObservedRunningTime="2025-12-12 16:45:46.434755603 +0000 UTC m=+3573.603395204" Dec 12 16:46:01 crc kubenswrapper[4693]: I1212 16:46:01.035792 4693 scope.go:117] "RemoveContainer" containerID="ce3b52dd13275eea79a053e69d6d57564272aa4153888241fd3705c9db372eac" Dec 12 16:46:04 crc kubenswrapper[4693]: I1212 16:46:04.617304 4693 generic.go:334] "Generic (PLEG): container finished" podID="5d9992e5-db20-4681-932e-980855ff3006" containerID="6f6e792ababfbbb3a92db44545bc599201176604a5317a0ccf6732e25d06fcb5" exitCode=0 Dec 12 16:46:04 crc kubenswrapper[4693]: I1212 16:46:04.617714 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" event={"ID":"5d9992e5-db20-4681-932e-980855ff3006","Type":"ContainerDied","Data":"6f6e792ababfbbb3a92db44545bc599201176604a5317a0ccf6732e25d06fcb5"} Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.156990 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.358156 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-ssh-key\") pod \"5d9992e5-db20-4681-932e-980855ff3006\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.358689 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-inventory\") pod \"5d9992e5-db20-4681-932e-980855ff3006\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.358816 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-logging-compute-config-data-1\") pod \"5d9992e5-db20-4681-932e-980855ff3006\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.359677 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6br9\" (UniqueName: \"kubernetes.io/projected/5d9992e5-db20-4681-932e-980855ff3006-kube-api-access-n6br9\") pod \"5d9992e5-db20-4681-932e-980855ff3006\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.359794 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-logging-compute-config-data-0\") pod \"5d9992e5-db20-4681-932e-980855ff3006\" (UID: \"5d9992e5-db20-4681-932e-980855ff3006\") " Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.373099 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d9992e5-db20-4681-932e-980855ff3006-kube-api-access-n6br9" (OuterVolumeSpecName: "kube-api-access-n6br9") pod "5d9992e5-db20-4681-932e-980855ff3006" (UID: "5d9992e5-db20-4681-932e-980855ff3006"). InnerVolumeSpecName "kube-api-access-n6br9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.394945 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-inventory" (OuterVolumeSpecName: "inventory") pod "5d9992e5-db20-4681-932e-980855ff3006" (UID: "5d9992e5-db20-4681-932e-980855ff3006"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.396506 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "5d9992e5-db20-4681-932e-980855ff3006" (UID: "5d9992e5-db20-4681-932e-980855ff3006"). InnerVolumeSpecName "logging-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.402814 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "5d9992e5-db20-4681-932e-980855ff3006" (UID: "5d9992e5-db20-4681-932e-980855ff3006"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.406989 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5d9992e5-db20-4681-932e-980855ff3006" (UID: "5d9992e5-db20-4681-932e-980855ff3006"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.463157 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.463207 4693 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.463219 4693 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.463234 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6br9\" (UniqueName: \"kubernetes.io/projected/5d9992e5-db20-4681-932e-980855ff3006-kube-api-access-n6br9\") on node \"crc\" DevicePath \"\"" Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.463249 4693 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5d9992e5-db20-4681-932e-980855ff3006-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.647606 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" event={"ID":"5d9992e5-db20-4681-932e-980855ff3006","Type":"ContainerDied","Data":"a0a441b78494f295ad40e8027ba5c86fe8e32eb93cd665c873fed9e984d39979"} Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.647649 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0a441b78494f295ad40e8027ba5c86fe8e32eb93cd665c873fed9e984d39979" Dec 12 16:46:06 crc kubenswrapper[4693]: I1212 16:46:06.647699 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-dljqk" Dec 12 16:46:42 crc kubenswrapper[4693]: I1212 16:46:42.530666 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:46:42 crc kubenswrapper[4693]: I1212 16:46:42.532838 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:47:12 crc kubenswrapper[4693]: I1212 16:47:12.530184 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:47:12 crc kubenswrapper[4693]: I1212 16:47:12.530852 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:47:42 crc kubenswrapper[4693]: I1212 16:47:42.536296 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:47:42 crc kubenswrapper[4693]: I1212 16:47:42.536989 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:47:42 crc kubenswrapper[4693]: I1212 16:47:42.537072 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 16:47:42 crc kubenswrapper[4693]: I1212 16:47:42.537797 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:47:42 crc kubenswrapper[4693]: I1212 16:47:42.537881 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" gracePeriod=600 Dec 12 16:47:42 crc kubenswrapper[4693]: E1212 16:47:42.665960 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:47:42 crc kubenswrapper[4693]: I1212 16:47:42.904977 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" exitCode=0 Dec 12 16:47:42 crc kubenswrapper[4693]: I1212 16:47:42.905334 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273"} Dec 12 16:47:42 crc kubenswrapper[4693]: I1212 16:47:42.905552 4693 scope.go:117] "RemoveContainer" containerID="e62673de23f46e5e24bbb6ce63f6df95e4e0371a449f0b6d43f70818c45017fc" Dec 12 16:47:42 crc kubenswrapper[4693]: I1212 16:47:42.906261 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:47:42 crc kubenswrapper[4693]: E1212 16:47:42.906620 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:47:53 crc kubenswrapper[4693]: I1212 16:47:53.372003 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:47:53 crc kubenswrapper[4693]: E1212 16:47:53.372955 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:48:08 crc kubenswrapper[4693]: I1212 16:48:08.358225 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:48:08 crc kubenswrapper[4693]: E1212 16:48:08.359417 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:48:23 crc kubenswrapper[4693]: I1212 16:48:23.368533 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:48:23 crc kubenswrapper[4693]: E1212 16:48:23.369355 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:48:35 crc kubenswrapper[4693]: I1212 16:48:35.357542 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:48:35 crc kubenswrapper[4693]: E1212 16:48:35.358491 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:48:49 crc kubenswrapper[4693]: I1212 16:48:49.357821 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:48:49 crc kubenswrapper[4693]: E1212 16:48:49.358837 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:49:02 crc kubenswrapper[4693]: I1212 16:49:02.357143 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:49:02 crc kubenswrapper[4693]: E1212 16:49:02.358011 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:49:16 crc kubenswrapper[4693]: I1212 16:49:16.357721 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:49:16 crc kubenswrapper[4693]: E1212 16:49:16.358700 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:49:28 crc kubenswrapper[4693]: I1212 16:49:28.357943 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:49:28 crc kubenswrapper[4693]: E1212 16:49:28.359960 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" 
podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:49:43 crc kubenswrapper[4693]: I1212 16:49:43.367893 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:49:43 crc kubenswrapper[4693]: E1212 16:49:43.369850 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:49:56 crc kubenswrapper[4693]: I1212 16:49:56.357731 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:49:56 crc kubenswrapper[4693]: E1212 16:49:56.358532 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:50:07 crc kubenswrapper[4693]: I1212 16:50:07.358741 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:50:07 crc kubenswrapper[4693]: E1212 16:50:07.359761 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:50:19 crc kubenswrapper[4693]: I1212 16:50:19.356983 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:50:19 crc kubenswrapper[4693]: E1212 16:50:19.357966 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:50:33 crc kubenswrapper[4693]: I1212 16:50:33.368190 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:50:33 crc kubenswrapper[4693]: E1212 16:50:33.369194 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:50:44 crc kubenswrapper[4693]: I1212 16:50:44.128523 4693 scope.go:117] "RemoveContainer" 
containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:50:44 crc kubenswrapper[4693]: E1212 16:50:44.129298 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:50:59 crc kubenswrapper[4693]: I1212 16:50:59.357380 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:50:59 crc kubenswrapper[4693]: E1212 16:50:59.358664 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:51:14 crc kubenswrapper[4693]: I1212 16:51:14.358105 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:51:14 crc kubenswrapper[4693]: E1212 16:51:14.359349 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:51:28 crc kubenswrapper[4693]: I1212 16:51:28.357622 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:51:28 crc kubenswrapper[4693]: E1212 16:51:28.359615 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:51:40 crc kubenswrapper[4693]: I1212 16:51:40.357648 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:51:40 crc kubenswrapper[4693]: E1212 16:51:40.358534 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:51:51 crc kubenswrapper[4693]: I1212 16:51:51.357159 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:51:51 crc kubenswrapper[4693]: E1212 16:51:51.357923 4693 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:52:05 crc kubenswrapper[4693]: I1212 16:52:05.357827 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:52:05 crc kubenswrapper[4693]: E1212 16:52:05.358984 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:52:09 crc kubenswrapper[4693]: I1212 16:52:09.815242 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hfclg"] Dec 12 16:52:09 crc kubenswrapper[4693]: E1212 16:52:09.816535 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d9992e5-db20-4681-932e-980855ff3006" containerName="logging-edpm-deployment-openstack-edpm-ipam" Dec 12 16:52:09 crc kubenswrapper[4693]: I1212 16:52:09.816560 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d9992e5-db20-4681-932e-980855ff3006" containerName="logging-edpm-deployment-openstack-edpm-ipam" Dec 12 16:52:09 crc kubenswrapper[4693]: I1212 16:52:09.816877 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d9992e5-db20-4681-932e-980855ff3006" containerName="logging-edpm-deployment-openstack-edpm-ipam" Dec 12 16:52:09 crc kubenswrapper[4693]: I1212 16:52:09.819377 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:09 crc kubenswrapper[4693]: I1212 16:52:09.843668 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hfclg"] Dec 12 16:52:09 crc kubenswrapper[4693]: I1212 16:52:09.953135 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4mrv\" (UniqueName: \"kubernetes.io/projected/f015fb9e-b96d-4d47-a3a2-6487469a8681-kube-api-access-n4mrv\") pod \"redhat-operators-hfclg\" (UID: \"f015fb9e-b96d-4d47-a3a2-6487469a8681\") " pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:09 crc kubenswrapper[4693]: I1212 16:52:09.953188 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f015fb9e-b96d-4d47-a3a2-6487469a8681-catalog-content\") pod \"redhat-operators-hfclg\" (UID: \"f015fb9e-b96d-4d47-a3a2-6487469a8681\") " pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:09 crc kubenswrapper[4693]: I1212 16:52:09.953637 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f015fb9e-b96d-4d47-a3a2-6487469a8681-utilities\") pod \"redhat-operators-hfclg\" (UID: \"f015fb9e-b96d-4d47-a3a2-6487469a8681\") " pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:10 crc kubenswrapper[4693]: I1212 16:52:10.055950 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f015fb9e-b96d-4d47-a3a2-6487469a8681-utilities\") pod \"redhat-operators-hfclg\" (UID: \"f015fb9e-b96d-4d47-a3a2-6487469a8681\") " pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:10 crc kubenswrapper[4693]: I1212 16:52:10.056178 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4mrv\" (UniqueName: \"kubernetes.io/projected/f015fb9e-b96d-4d47-a3a2-6487469a8681-kube-api-access-n4mrv\") pod \"redhat-operators-hfclg\" (UID: \"f015fb9e-b96d-4d47-a3a2-6487469a8681\") " pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:10 crc kubenswrapper[4693]: I1212 16:52:10.056218 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f015fb9e-b96d-4d47-a3a2-6487469a8681-catalog-content\") pod \"redhat-operators-hfclg\" (UID: \"f015fb9e-b96d-4d47-a3a2-6487469a8681\") " pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:10 crc kubenswrapper[4693]: I1212 16:52:10.056565 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f015fb9e-b96d-4d47-a3a2-6487469a8681-utilities\") pod \"redhat-operators-hfclg\" (UID: \"f015fb9e-b96d-4d47-a3a2-6487469a8681\") " pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:10 crc kubenswrapper[4693]: I1212 16:52:10.056885 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f015fb9e-b96d-4d47-a3a2-6487469a8681-catalog-content\") pod \"redhat-operators-hfclg\" (UID: \"f015fb9e-b96d-4d47-a3a2-6487469a8681\") " pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:10 crc kubenswrapper[4693]: I1212 16:52:10.086114 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n4mrv\" (UniqueName: \"kubernetes.io/projected/f015fb9e-b96d-4d47-a3a2-6487469a8681-kube-api-access-n4mrv\") pod \"redhat-operators-hfclg\" (UID: \"f015fb9e-b96d-4d47-a3a2-6487469a8681\") " pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:10 crc kubenswrapper[4693]: I1212 16:52:10.155415 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:10 crc kubenswrapper[4693]: I1212 16:52:10.741456 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hfclg"] Dec 12 16:52:12 crc kubenswrapper[4693]: I1212 16:52:12.294625 4693 generic.go:334] "Generic (PLEG): container finished" podID="f015fb9e-b96d-4d47-a3a2-6487469a8681" containerID="4cc3826c0634a89a85032a1155d8eac1011a426dccdbb45719a777f8a9acb203" exitCode=0 Dec 12 16:52:12 crc kubenswrapper[4693]: I1212 16:52:12.294706 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfclg" event={"ID":"f015fb9e-b96d-4d47-a3a2-6487469a8681","Type":"ContainerDied","Data":"4cc3826c0634a89a85032a1155d8eac1011a426dccdbb45719a777f8a9acb203"} Dec 12 16:52:12 crc kubenswrapper[4693]: I1212 16:52:12.295108 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfclg" event={"ID":"f015fb9e-b96d-4d47-a3a2-6487469a8681","Type":"ContainerStarted","Data":"72602a2f6d634b61ad90a8a35fb9622225591374601567665a12b2cf63194093"} Dec 12 16:52:12 crc kubenswrapper[4693]: I1212 16:52:12.297875 4693 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 16:52:14 crc kubenswrapper[4693]: I1212 16:52:14.341854 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfclg" event={"ID":"f015fb9e-b96d-4d47-a3a2-6487469a8681","Type":"ContainerStarted","Data":"f04f672999e191b4d1930d3d87b339ce6395eb2b61a6167b2b03eff63a7e2683"} Dec 12 16:52:17 crc kubenswrapper[4693]: I1212 16:52:17.380306 4693 generic.go:334] "Generic (PLEG): container finished" podID="f015fb9e-b96d-4d47-a3a2-6487469a8681" containerID="f04f672999e191b4d1930d3d87b339ce6395eb2b61a6167b2b03eff63a7e2683" exitCode=0 Dec 12 16:52:17 crc kubenswrapper[4693]: I1212 16:52:17.380375 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfclg" event={"ID":"f015fb9e-b96d-4d47-a3a2-6487469a8681","Type":"ContainerDied","Data":"f04f672999e191b4d1930d3d87b339ce6395eb2b61a6167b2b03eff63a7e2683"} Dec 12 16:52:18 crc kubenswrapper[4693]: I1212 16:52:18.397854 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfclg" event={"ID":"f015fb9e-b96d-4d47-a3a2-6487469a8681","Type":"ContainerStarted","Data":"bd66cb4ac21ea8f83289e6bb842405a62d5cc810e0c1a87ea4af3dd5df861429"} Dec 12 16:52:18 crc kubenswrapper[4693]: I1212 16:52:18.456896 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hfclg" podStartSLOduration=3.927346972 podStartE2EDuration="9.456872901s" podCreationTimestamp="2025-12-12 16:52:09 +0000 UTC" firstStartedPulling="2025-12-12 16:52:12.297593543 +0000 UTC m=+3959.466233154" lastFinishedPulling="2025-12-12 16:52:17.827119442 +0000 UTC m=+3964.995759083" observedRunningTime="2025-12-12 16:52:18.432886358 +0000 UTC m=+3965.601525959" watchObservedRunningTime="2025-12-12 16:52:18.456872901 +0000 UTC m=+3965.625512502" Dec 12 16:52:20 crc 
kubenswrapper[4693]: I1212 16:52:20.156098 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:20 crc kubenswrapper[4693]: I1212 16:52:20.156478 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:20 crc kubenswrapper[4693]: I1212 16:52:20.357266 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:52:20 crc kubenswrapper[4693]: E1212 16:52:20.357888 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:52:21 crc kubenswrapper[4693]: I1212 16:52:21.227816 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hfclg" podUID="f015fb9e-b96d-4d47-a3a2-6487469a8681" containerName="registry-server" probeResult="failure" output=< Dec 12 16:52:21 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 16:52:21 crc kubenswrapper[4693]: > Dec 12 16:52:30 crc kubenswrapper[4693]: I1212 16:52:30.213853 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:30 crc kubenswrapper[4693]: I1212 16:52:30.272815 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:30 crc kubenswrapper[4693]: I1212 16:52:30.452337 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hfclg"] Dec 12 16:52:31 crc kubenswrapper[4693]: I1212 16:52:31.568708 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hfclg" podUID="f015fb9e-b96d-4d47-a3a2-6487469a8681" containerName="registry-server" containerID="cri-o://bd66cb4ac21ea8f83289e6bb842405a62d5cc810e0c1a87ea4af3dd5df861429" gracePeriod=2
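
The probe block above shows the other side of pod startup: the registry-server container's startup probe fails at 16:52:21 because nothing is listening on port 50051 yet, succeeds at 16:52:30 once the catalog is being served, and the pod goes ready moments before the API deletes it (catalog pods are routinely short-lived). The probe output reads like the grpc_health_probe-style check used by operator catalog images; at its core the failure is a dial that cannot complete within its 1s budget, sketched here as a bare TCP connect to localhost, an assumption that simplifies away the gRPC health-check RPC a real probe would perform after connecting:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The ":50051" port and the 1s budget come straight from the probe
        // output above; dialing 127.0.0.1 assumes we run inside the pod's
        // network namespace, as the probe does.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", time.Second)
        if err != nil {
            // Until registry-server finishes loading its catalog this is the
            // path taken, and the kubelet records probeResult="failure".
            fmt.Printf("timeout: failed to connect service %q within 1s: %v\n", ":50051", err)
            return
        }
        conn.Close()
        fmt.Println("startup probe passed; readiness can now be checked")
    }
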
Need to start a new one" pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.334188 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f015fb9e-b96d-4d47-a3a2-6487469a8681-catalog-content\") pod \"f015fb9e-b96d-4d47-a3a2-6487469a8681\" (UID: \"f015fb9e-b96d-4d47-a3a2-6487469a8681\") " Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.334363 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f015fb9e-b96d-4d47-a3a2-6487469a8681-utilities\") pod \"f015fb9e-b96d-4d47-a3a2-6487469a8681\" (UID: \"f015fb9e-b96d-4d47-a3a2-6487469a8681\") " Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.334407 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4mrv\" (UniqueName: \"kubernetes.io/projected/f015fb9e-b96d-4d47-a3a2-6487469a8681-kube-api-access-n4mrv\") pod \"f015fb9e-b96d-4d47-a3a2-6487469a8681\" (UID: \"f015fb9e-b96d-4d47-a3a2-6487469a8681\") " Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.336930 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f015fb9e-b96d-4d47-a3a2-6487469a8681-utilities" (OuterVolumeSpecName: "utilities") pod "f015fb9e-b96d-4d47-a3a2-6487469a8681" (UID: "f015fb9e-b96d-4d47-a3a2-6487469a8681"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.342338 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f015fb9e-b96d-4d47-a3a2-6487469a8681-kube-api-access-n4mrv" (OuterVolumeSpecName: "kube-api-access-n4mrv") pod "f015fb9e-b96d-4d47-a3a2-6487469a8681" (UID: "f015fb9e-b96d-4d47-a3a2-6487469a8681"). InnerVolumeSpecName "kube-api-access-n4mrv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.357414 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:52:32 crc kubenswrapper[4693]: E1212 16:52:32.357788 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.438929 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f015fb9e-b96d-4d47-a3a2-6487469a8681-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.438979 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4mrv\" (UniqueName: \"kubernetes.io/projected/f015fb9e-b96d-4d47-a3a2-6487469a8681-kube-api-access-n4mrv\") on node \"crc\" DevicePath \"\"" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.453431 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f015fb9e-b96d-4d47-a3a2-6487469a8681-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f015fb9e-b96d-4d47-a3a2-6487469a8681" (UID: "f015fb9e-b96d-4d47-a3a2-6487469a8681"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.541966 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f015fb9e-b96d-4d47-a3a2-6487469a8681-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.582636 4693 generic.go:334] "Generic (PLEG): container finished" podID="f015fb9e-b96d-4d47-a3a2-6487469a8681" containerID="bd66cb4ac21ea8f83289e6bb842405a62d5cc810e0c1a87ea4af3dd5df861429" exitCode=0 Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.582688 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfclg" event={"ID":"f015fb9e-b96d-4d47-a3a2-6487469a8681","Type":"ContainerDied","Data":"bd66cb4ac21ea8f83289e6bb842405a62d5cc810e0c1a87ea4af3dd5df861429"} Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.582701 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hfclg" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.582718 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfclg" event={"ID":"f015fb9e-b96d-4d47-a3a2-6487469a8681","Type":"ContainerDied","Data":"72602a2f6d634b61ad90a8a35fb9622225591374601567665a12b2cf63194093"} Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.582751 4693 scope.go:117] "RemoveContainer" containerID="bd66cb4ac21ea8f83289e6bb842405a62d5cc810e0c1a87ea4af3dd5df861429" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.623255 4693 scope.go:117] "RemoveContainer" containerID="f04f672999e191b4d1930d3d87b339ce6395eb2b61a6167b2b03eff63a7e2683" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.631233 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hfclg"] Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.649504 4693 scope.go:117] "RemoveContainer" containerID="4cc3826c0634a89a85032a1155d8eac1011a426dccdbb45719a777f8a9acb203" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.657110 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hfclg"] Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.727422 4693 scope.go:117] "RemoveContainer" containerID="bd66cb4ac21ea8f83289e6bb842405a62d5cc810e0c1a87ea4af3dd5df861429" Dec 12 16:52:32 crc kubenswrapper[4693]: E1212 16:52:32.728344 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd66cb4ac21ea8f83289e6bb842405a62d5cc810e0c1a87ea4af3dd5df861429\": container with ID starting with bd66cb4ac21ea8f83289e6bb842405a62d5cc810e0c1a87ea4af3dd5df861429 not found: ID does not exist" containerID="bd66cb4ac21ea8f83289e6bb842405a62d5cc810e0c1a87ea4af3dd5df861429" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.728394 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd66cb4ac21ea8f83289e6bb842405a62d5cc810e0c1a87ea4af3dd5df861429"} err="failed to get container status \"bd66cb4ac21ea8f83289e6bb842405a62d5cc810e0c1a87ea4af3dd5df861429\": rpc error: code = NotFound desc = could not find container \"bd66cb4ac21ea8f83289e6bb842405a62d5cc810e0c1a87ea4af3dd5df861429\": container with ID starting with bd66cb4ac21ea8f83289e6bb842405a62d5cc810e0c1a87ea4af3dd5df861429 not found: ID does not exist" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.728421 4693 scope.go:117] "RemoveContainer" containerID="f04f672999e191b4d1930d3d87b339ce6395eb2b61a6167b2b03eff63a7e2683" Dec 12 16:52:32 crc kubenswrapper[4693]: E1212 16:52:32.728700 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04f672999e191b4d1930d3d87b339ce6395eb2b61a6167b2b03eff63a7e2683\": container with ID starting with f04f672999e191b4d1930d3d87b339ce6395eb2b61a6167b2b03eff63a7e2683 not found: ID does not exist" containerID="f04f672999e191b4d1930d3d87b339ce6395eb2b61a6167b2b03eff63a7e2683" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.728734 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04f672999e191b4d1930d3d87b339ce6395eb2b61a6167b2b03eff63a7e2683"} err="failed to get container status \"f04f672999e191b4d1930d3d87b339ce6395eb2b61a6167b2b03eff63a7e2683\": rpc error: code = NotFound desc = could not find container 
\"f04f672999e191b4d1930d3d87b339ce6395eb2b61a6167b2b03eff63a7e2683\": container with ID starting with f04f672999e191b4d1930d3d87b339ce6395eb2b61a6167b2b03eff63a7e2683 not found: ID does not exist" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.728752 4693 scope.go:117] "RemoveContainer" containerID="4cc3826c0634a89a85032a1155d8eac1011a426dccdbb45719a777f8a9acb203" Dec 12 16:52:32 crc kubenswrapper[4693]: E1212 16:52:32.729029 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cc3826c0634a89a85032a1155d8eac1011a426dccdbb45719a777f8a9acb203\": container with ID starting with 4cc3826c0634a89a85032a1155d8eac1011a426dccdbb45719a777f8a9acb203 not found: ID does not exist" containerID="4cc3826c0634a89a85032a1155d8eac1011a426dccdbb45719a777f8a9acb203" Dec 12 16:52:32 crc kubenswrapper[4693]: I1212 16:52:32.729065 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cc3826c0634a89a85032a1155d8eac1011a426dccdbb45719a777f8a9acb203"} err="failed to get container status \"4cc3826c0634a89a85032a1155d8eac1011a426dccdbb45719a777f8a9acb203\": rpc error: code = NotFound desc = could not find container \"4cc3826c0634a89a85032a1155d8eac1011a426dccdbb45719a777f8a9acb203\": container with ID starting with 4cc3826c0634a89a85032a1155d8eac1011a426dccdbb45719a777f8a9acb203 not found: ID does not exist" Dec 12 16:52:33 crc kubenswrapper[4693]: I1212 16:52:33.375298 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f015fb9e-b96d-4d47-a3a2-6487469a8681" path="/var/lib/kubelet/pods/f015fb9e-b96d-4d47-a3a2-6487469a8681/volumes" Dec 12 16:52:47 crc kubenswrapper[4693]: I1212 16:52:47.357194 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:52:47 crc kubenswrapper[4693]: I1212 16:52:47.753041 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"c4bc5886d2bac521b1340d46b8799ca6fd8658509c0ec214ab3af9c4d4338c79"} Dec 12 16:52:51 crc kubenswrapper[4693]: I1212 16:52:51.914287 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-66c8w"] Dec 12 16:52:51 crc kubenswrapper[4693]: E1212 16:52:51.915724 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f015fb9e-b96d-4d47-a3a2-6487469a8681" containerName="extract-content" Dec 12 16:52:51 crc kubenswrapper[4693]: I1212 16:52:51.915743 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f015fb9e-b96d-4d47-a3a2-6487469a8681" containerName="extract-content" Dec 12 16:52:51 crc kubenswrapper[4693]: E1212 16:52:51.915761 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f015fb9e-b96d-4d47-a3a2-6487469a8681" containerName="extract-utilities" Dec 12 16:52:51 crc kubenswrapper[4693]: I1212 16:52:51.915770 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f015fb9e-b96d-4d47-a3a2-6487469a8681" containerName="extract-utilities" Dec 12 16:52:51 crc kubenswrapper[4693]: E1212 16:52:51.915822 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f015fb9e-b96d-4d47-a3a2-6487469a8681" containerName="registry-server" Dec 12 16:52:51 crc kubenswrapper[4693]: I1212 16:52:51.915831 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f015fb9e-b96d-4d47-a3a2-6487469a8681" 
containerName="registry-server" Dec 12 16:52:51 crc kubenswrapper[4693]: I1212 16:52:51.916189 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f015fb9e-b96d-4d47-a3a2-6487469a8681" containerName="registry-server" Dec 12 16:52:51 crc kubenswrapper[4693]: I1212 16:52:51.918585 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:52:51 crc kubenswrapper[4693]: I1212 16:52:51.947648 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-66c8w"] Dec 12 16:52:52 crc kubenswrapper[4693]: I1212 16:52:52.070741 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-catalog-content\") pod \"redhat-marketplace-66c8w\" (UID: \"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6\") " pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:52:52 crc kubenswrapper[4693]: I1212 16:52:52.070849 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4x2d\" (UniqueName: \"kubernetes.io/projected/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-kube-api-access-t4x2d\") pod \"redhat-marketplace-66c8w\" (UID: \"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6\") " pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:52:52 crc kubenswrapper[4693]: I1212 16:52:52.070915 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-utilities\") pod \"redhat-marketplace-66c8w\" (UID: \"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6\") " pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:52:52 crc kubenswrapper[4693]: I1212 16:52:52.173417 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4x2d\" (UniqueName: \"kubernetes.io/projected/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-kube-api-access-t4x2d\") pod \"redhat-marketplace-66c8w\" (UID: \"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6\") " pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:52:52 crc kubenswrapper[4693]: I1212 16:52:52.173550 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-utilities\") pod \"redhat-marketplace-66c8w\" (UID: \"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6\") " pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:52:52 crc kubenswrapper[4693]: I1212 16:52:52.173697 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-catalog-content\") pod \"redhat-marketplace-66c8w\" (UID: \"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6\") " pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:52:52 crc kubenswrapper[4693]: I1212 16:52:52.174305 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-utilities\") pod \"redhat-marketplace-66c8w\" (UID: \"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6\") " pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:52:52 crc kubenswrapper[4693]: I1212 16:52:52.174392 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-catalog-content\") pod \"redhat-marketplace-66c8w\" (UID: \"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6\") " pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:52:52 crc kubenswrapper[4693]: I1212 16:52:52.204527 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4x2d\" (UniqueName: \"kubernetes.io/projected/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-kube-api-access-t4x2d\") pod \"redhat-marketplace-66c8w\" (UID: \"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6\") " pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:52:52 crc kubenswrapper[4693]: I1212 16:52:52.268004 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:52:52 crc kubenswrapper[4693]: I1212 16:52:52.959531 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-66c8w"] Dec 12 16:52:52 crc kubenswrapper[4693]: W1212 16:52:52.969960 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ee6943d_72ed_4a11_9fe6_0c2056e8bfd6.slice/crio-b6001d00bdf215afedff03a1a7521db7c491657021016e06fa576170d29a1348 WatchSource:0}: Error finding container b6001d00bdf215afedff03a1a7521db7c491657021016e06fa576170d29a1348: Status 404 returned error can't find the container with id b6001d00bdf215afedff03a1a7521db7c491657021016e06fa576170d29a1348 Dec 12 16:52:53 crc kubenswrapper[4693]: I1212 16:52:53.823321 4693 generic.go:334] "Generic (PLEG): container finished" podID="1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6" containerID="5c3fd9ba772b75827472876ee450661bf4a4f0619bbdfc56a62e30bf3172aa94" exitCode=0 Dec 12 16:52:53 crc kubenswrapper[4693]: I1212 16:52:53.823391 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66c8w" event={"ID":"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6","Type":"ContainerDied","Data":"5c3fd9ba772b75827472876ee450661bf4a4f0619bbdfc56a62e30bf3172aa94"} Dec 12 16:52:53 crc kubenswrapper[4693]: I1212 16:52:53.823886 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66c8w" event={"ID":"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6","Type":"ContainerStarted","Data":"b6001d00bdf215afedff03a1a7521db7c491657021016e06fa576170d29a1348"} Dec 12 16:52:55 crc kubenswrapper[4693]: I1212 16:52:55.860345 4693 generic.go:334] "Generic (PLEG): container finished" podID="1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6" containerID="3bf9383d0dd83520c935f597adaabea18f2aee1e564b7cbb6ef28e9c46c53f49" exitCode=0 Dec 12 16:52:55 crc kubenswrapper[4693]: I1212 16:52:55.860586 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66c8w" event={"ID":"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6","Type":"ContainerDied","Data":"3bf9383d0dd83520c935f597adaabea18f2aee1e564b7cbb6ef28e9c46c53f49"} Dec 12 16:52:57 crc kubenswrapper[4693]: I1212 16:52:57.899264 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66c8w" event={"ID":"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6","Type":"ContainerStarted","Data":"f51e29e2f0c91aa43f3e83f3fea14019af018cf3466cb55293050246f9da00b6"} Dec 12 16:52:57 crc kubenswrapper[4693]: I1212 16:52:57.943824 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-66c8w" 
Dec 12 16:53:02 crc kubenswrapper[4693]: I1212 16:53:02.268548 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:53:02 crc kubenswrapper[4693]: I1212 16:53:02.269147 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:53:02 crc kubenswrapper[4693]: I1212 16:53:02.334915 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:53:03 crc kubenswrapper[4693]: I1212 16:53:03.105087 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:53:03 crc kubenswrapper[4693]: I1212 16:53:03.196752 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-66c8w"] Dec 12 16:53:05 crc kubenswrapper[4693]: I1212 16:53:05.021473 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-66c8w" podUID="1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6" containerName="registry-server" containerID="cri-o://f51e29e2f0c91aa43f3e83f3fea14019af018cf3466cb55293050246f9da00b6" gracePeriod=2 Dec 12 16:53:05 crc kubenswrapper[4693]: I1212 16:53:05.645849 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:53:05 crc kubenswrapper[4693]: I1212 16:53:05.752405 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4x2d\" (UniqueName: \"kubernetes.io/projected/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-kube-api-access-t4x2d\") pod \"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6\" (UID: \"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6\") " Dec 12 16:53:05 crc kubenswrapper[4693]: I1212 16:53:05.752450 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-utilities\") pod \"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6\" (UID: \"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6\") " Dec 12 16:53:05 crc kubenswrapper[4693]: I1212 16:53:05.752479 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-catalog-content\") pod \"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6\" (UID: \"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6\") " Dec 12 16:53:05 crc kubenswrapper[4693]: I1212 16:53:05.753772 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-utilities" (OuterVolumeSpecName: "utilities") pod "1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6" (UID: "1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:53:05 crc kubenswrapper[4693]: I1212 16:53:05.757499 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-kube-api-access-t4x2d" (OuterVolumeSpecName: "kube-api-access-t4x2d") pod "1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6" (UID: "1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6"). InnerVolumeSpecName "kube-api-access-t4x2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:53:05 crc kubenswrapper[4693]: I1212 16:53:05.776162 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6" (UID: "1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:53:05 crc kubenswrapper[4693]: I1212 16:53:05.856583 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4x2d\" (UniqueName: \"kubernetes.io/projected/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-kube-api-access-t4x2d\") on node \"crc\" DevicePath \"\"" Dec 12 16:53:05 crc kubenswrapper[4693]: I1212 16:53:05.856621 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:53:05 crc kubenswrapper[4693]: I1212 16:53:05.856637 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:53:06 crc kubenswrapper[4693]: I1212 16:53:06.037756 4693 generic.go:334] "Generic (PLEG): container finished" podID="1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6" containerID="f51e29e2f0c91aa43f3e83f3fea14019af018cf3466cb55293050246f9da00b6" exitCode=0 Dec 12 16:53:06 crc kubenswrapper[4693]: I1212 16:53:06.037880 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66c8w" Dec 12 16:53:06 crc kubenswrapper[4693]: I1212 16:53:06.037906 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66c8w" event={"ID":"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6","Type":"ContainerDied","Data":"f51e29e2f0c91aa43f3e83f3fea14019af018cf3466cb55293050246f9da00b6"} Dec 12 16:53:06 crc kubenswrapper[4693]: I1212 16:53:06.039416 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66c8w" event={"ID":"1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6","Type":"ContainerDied","Data":"b6001d00bdf215afedff03a1a7521db7c491657021016e06fa576170d29a1348"} Dec 12 16:53:06 crc kubenswrapper[4693]: I1212 16:53:06.039478 4693 scope.go:117] "RemoveContainer" containerID="f51e29e2f0c91aa43f3e83f3fea14019af018cf3466cb55293050246f9da00b6" Dec 12 16:53:06 crc kubenswrapper[4693]: I1212 16:53:06.087121 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-66c8w"] Dec 12 16:53:06 crc kubenswrapper[4693]: I1212 16:53:06.087286 4693 scope.go:117] "RemoveContainer" containerID="3bf9383d0dd83520c935f597adaabea18f2aee1e564b7cbb6ef28e9c46c53f49" Dec 12 16:53:06 crc kubenswrapper[4693]: I1212 16:53:06.109363 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-66c8w"] Dec 12 16:53:06 crc kubenswrapper[4693]: I1212 16:53:06.110177 4693 scope.go:117] "RemoveContainer" containerID="5c3fd9ba772b75827472876ee450661bf4a4f0619bbdfc56a62e30bf3172aa94" Dec 12 16:53:06 crc kubenswrapper[4693]: I1212 16:53:06.163719 4693 scope.go:117] "RemoveContainer" containerID="f51e29e2f0c91aa43f3e83f3fea14019af018cf3466cb55293050246f9da00b6" Dec 12 16:53:06 crc kubenswrapper[4693]: E1212 16:53:06.164151 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f51e29e2f0c91aa43f3e83f3fea14019af018cf3466cb55293050246f9da00b6\": container with ID starting with f51e29e2f0c91aa43f3e83f3fea14019af018cf3466cb55293050246f9da00b6 not found: ID does not exist" containerID="f51e29e2f0c91aa43f3e83f3fea14019af018cf3466cb55293050246f9da00b6" Dec 12 16:53:06 crc kubenswrapper[4693]: I1212 16:53:06.164219 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f51e29e2f0c91aa43f3e83f3fea14019af018cf3466cb55293050246f9da00b6"} err="failed to get container status \"f51e29e2f0c91aa43f3e83f3fea14019af018cf3466cb55293050246f9da00b6\": rpc error: code = NotFound desc = could not find container \"f51e29e2f0c91aa43f3e83f3fea14019af018cf3466cb55293050246f9da00b6\": container with ID starting with f51e29e2f0c91aa43f3e83f3fea14019af018cf3466cb55293050246f9da00b6 not found: ID does not exist" Dec 12 16:53:06 crc kubenswrapper[4693]: I1212 16:53:06.164261 4693 scope.go:117] "RemoveContainer" containerID="3bf9383d0dd83520c935f597adaabea18f2aee1e564b7cbb6ef28e9c46c53f49" Dec 12 16:53:06 crc kubenswrapper[4693]: E1212 16:53:06.164600 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bf9383d0dd83520c935f597adaabea18f2aee1e564b7cbb6ef28e9c46c53f49\": container with ID starting with 3bf9383d0dd83520c935f597adaabea18f2aee1e564b7cbb6ef28e9c46c53f49 not found: ID does not exist" containerID="3bf9383d0dd83520c935f597adaabea18f2aee1e564b7cbb6ef28e9c46c53f49" Dec 12 16:53:06 crc kubenswrapper[4693]: I1212 16:53:06.164633 4693 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bf9383d0dd83520c935f597adaabea18f2aee1e564b7cbb6ef28e9c46c53f49"} err="failed to get container status \"3bf9383d0dd83520c935f597adaabea18f2aee1e564b7cbb6ef28e9c46c53f49\": rpc error: code = NotFound desc = could not find container \"3bf9383d0dd83520c935f597adaabea18f2aee1e564b7cbb6ef28e9c46c53f49\": container with ID starting with 3bf9383d0dd83520c935f597adaabea18f2aee1e564b7cbb6ef28e9c46c53f49 not found: ID does not exist" Dec 12 16:53:06 crc kubenswrapper[4693]: I1212 16:53:06.164655 4693 scope.go:117] "RemoveContainer" containerID="5c3fd9ba772b75827472876ee450661bf4a4f0619bbdfc56a62e30bf3172aa94" Dec 12 16:53:06 crc kubenswrapper[4693]: E1212 16:53:06.165109 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c3fd9ba772b75827472876ee450661bf4a4f0619bbdfc56a62e30bf3172aa94\": container with ID starting with 5c3fd9ba772b75827472876ee450661bf4a4f0619bbdfc56a62e30bf3172aa94 not found: ID does not exist" containerID="5c3fd9ba772b75827472876ee450661bf4a4f0619bbdfc56a62e30bf3172aa94" Dec 12 16:53:06 crc kubenswrapper[4693]: I1212 16:53:06.165339 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c3fd9ba772b75827472876ee450661bf4a4f0619bbdfc56a62e30bf3172aa94"} err="failed to get container status \"5c3fd9ba772b75827472876ee450661bf4a4f0619bbdfc56a62e30bf3172aa94\": rpc error: code = NotFound desc = could not find container \"5c3fd9ba772b75827472876ee450661bf4a4f0619bbdfc56a62e30bf3172aa94\": container with ID starting with 5c3fd9ba772b75827472876ee450661bf4a4f0619bbdfc56a62e30bf3172aa94 not found: ID does not exist" Dec 12 16:53:07 crc kubenswrapper[4693]: I1212 16:53:07.372906 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6" path="/var/lib/kubelet/pods/1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6/volumes" Dec 12 16:53:30 crc kubenswrapper[4693]: I1212 16:53:30.642899 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-42bmp"] Dec 12 16:53:30 crc kubenswrapper[4693]: E1212 16:53:30.648422 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6" containerName="registry-server" Dec 12 16:53:30 crc kubenswrapper[4693]: I1212 16:53:30.648440 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6" containerName="registry-server" Dec 12 16:53:30 crc kubenswrapper[4693]: E1212 16:53:30.648471 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6" containerName="extract-content" Dec 12 16:53:30 crc kubenswrapper[4693]: I1212 16:53:30.648477 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6" containerName="extract-content" Dec 12 16:53:30 crc kubenswrapper[4693]: E1212 16:53:30.648531 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6" containerName="extract-utilities" Dec 12 16:53:30 crc kubenswrapper[4693]: I1212 16:53:30.648537 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6" containerName="extract-utilities" Dec 12 16:53:30 crc kubenswrapper[4693]: I1212 16:53:30.648981 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee6943d-72ed-4a11-9fe6-0c2056e8bfd6" 
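
The RemoveContainer / NotFound pairs above are benign: the kubelet retries deletion of containers that CRI-O has already pruned, and it treats the NotFound answer as success so cleanup converges instead of wedging. A minimal Go sketch of that idempotent-delete pattern, assuming a gRPC-backed runtime client (the remover interface, fakeRuntime, and removeIfPresent are illustrative names, not kubelet code):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // remover stands in for the CRI runtime client (illustrative only).
    type remover interface {
        RemoveContainer(ctx context.Context, id string) error
    }

    // fakeRuntime mimics the second delete pass in the log: the container
    // is already gone, so the runtime answers with gRPC NotFound.
    type fakeRuntime struct{}

    func (fakeRuntime) RemoveContainer(_ context.Context, id string) error {
        return status.Error(codes.NotFound, "could not find container "+id)
    }

    // removeIfPresent treats NotFound as success, so repeated cleanup
    // passes over the same container ID stay idempotent.
    func removeIfPresent(ctx context.Context, rt remover, id string) error {
        if err := rt.RemoveContainer(ctx, id); err != nil && status.Code(err) != codes.NotFound {
            return err
        }
        return nil
    }

    func main() {
        err := removeIfPresent(context.Background(), fakeRuntime{}, "f51e29e2f0c9")
        fmt.Println("cleanup error:", err) // cleanup error: <nil>
    }

This is why the "DeleteContainer returned error" lines above are logged at info level rather than surfaced as sync failures.
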
containerName="registry-server" Dec 12 16:53:30 crc kubenswrapper[4693]: I1212 16:53:30.669801 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-42bmp"] Dec 12 16:53:30 crc kubenswrapper[4693]: I1212 16:53:30.670340 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:30 crc kubenswrapper[4693]: I1212 16:53:30.820545 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b0cccd-5975-430f-9566-073f6f9606b0-catalog-content\") pod \"community-operators-42bmp\" (UID: \"23b0cccd-5975-430f-9566-073f6f9606b0\") " pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:30 crc kubenswrapper[4693]: I1212 16:53:30.820627 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b0cccd-5975-430f-9566-073f6f9606b0-utilities\") pod \"community-operators-42bmp\" (UID: \"23b0cccd-5975-430f-9566-073f6f9606b0\") " pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:30 crc kubenswrapper[4693]: I1212 16:53:30.820836 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k22mt\" (UniqueName: \"kubernetes.io/projected/23b0cccd-5975-430f-9566-073f6f9606b0-kube-api-access-k22mt\") pod \"community-operators-42bmp\" (UID: \"23b0cccd-5975-430f-9566-073f6f9606b0\") " pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:30 crc kubenswrapper[4693]: I1212 16:53:30.923338 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k22mt\" (UniqueName: \"kubernetes.io/projected/23b0cccd-5975-430f-9566-073f6f9606b0-kube-api-access-k22mt\") pod \"community-operators-42bmp\" (UID: \"23b0cccd-5975-430f-9566-073f6f9606b0\") " pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:30 crc kubenswrapper[4693]: I1212 16:53:30.923482 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b0cccd-5975-430f-9566-073f6f9606b0-catalog-content\") pod \"community-operators-42bmp\" (UID: \"23b0cccd-5975-430f-9566-073f6f9606b0\") " pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:30 crc kubenswrapper[4693]: I1212 16:53:30.923537 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b0cccd-5975-430f-9566-073f6f9606b0-utilities\") pod \"community-operators-42bmp\" (UID: \"23b0cccd-5975-430f-9566-073f6f9606b0\") " pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:30 crc kubenswrapper[4693]: I1212 16:53:30.924079 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b0cccd-5975-430f-9566-073f6f9606b0-catalog-content\") pod \"community-operators-42bmp\" (UID: \"23b0cccd-5975-430f-9566-073f6f9606b0\") " pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:30 crc kubenswrapper[4693]: I1212 16:53:30.924142 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b0cccd-5975-430f-9566-073f6f9606b0-utilities\") pod \"community-operators-42bmp\" (UID: \"23b0cccd-5975-430f-9566-073f6f9606b0\") " 
pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:30 crc kubenswrapper[4693]: I1212 16:53:30.951195 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k22mt\" (UniqueName: \"kubernetes.io/projected/23b0cccd-5975-430f-9566-073f6f9606b0-kube-api-access-k22mt\") pod \"community-operators-42bmp\" (UID: \"23b0cccd-5975-430f-9566-073f6f9606b0\") " pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:31 crc kubenswrapper[4693]: I1212 16:53:31.011114 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:31 crc kubenswrapper[4693]: I1212 16:53:31.545448 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-42bmp"] Dec 12 16:53:32 crc kubenswrapper[4693]: I1212 16:53:32.341753 4693 generic.go:334] "Generic (PLEG): container finished" podID="23b0cccd-5975-430f-9566-073f6f9606b0" containerID="96570d00f3942ebe87ce4480ca8b399f22ce8be290aa14947e38ce218951aa9f" exitCode=0 Dec 12 16:53:32 crc kubenswrapper[4693]: I1212 16:53:32.342534 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42bmp" event={"ID":"23b0cccd-5975-430f-9566-073f6f9606b0","Type":"ContainerDied","Data":"96570d00f3942ebe87ce4480ca8b399f22ce8be290aa14947e38ce218951aa9f"} Dec 12 16:53:32 crc kubenswrapper[4693]: I1212 16:53:32.342576 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42bmp" event={"ID":"23b0cccd-5975-430f-9566-073f6f9606b0","Type":"ContainerStarted","Data":"0800e1a93df9ac25c0b460faa87a8fcbad3888075e50a71c6e71be8a767e7283"} Dec 12 16:53:33 crc kubenswrapper[4693]: I1212 16:53:33.389621 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42bmp" event={"ID":"23b0cccd-5975-430f-9566-073f6f9606b0","Type":"ContainerStarted","Data":"c3f6f6af67f6eabf9b3545cf3945f79ae99c8283a75b8ed9de136838a2569a6a"} Dec 12 16:53:34 crc kubenswrapper[4693]: I1212 16:53:34.403950 4693 generic.go:334] "Generic (PLEG): container finished" podID="23b0cccd-5975-430f-9566-073f6f9606b0" containerID="c3f6f6af67f6eabf9b3545cf3945f79ae99c8283a75b8ed9de136838a2569a6a" exitCode=0 Dec 12 16:53:34 crc kubenswrapper[4693]: I1212 16:53:34.404105 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42bmp" event={"ID":"23b0cccd-5975-430f-9566-073f6f9606b0","Type":"ContainerDied","Data":"c3f6f6af67f6eabf9b3545cf3945f79ae99c8283a75b8ed9de136838a2569a6a"} Dec 12 16:53:35 crc kubenswrapper[4693]: I1212 16:53:35.419145 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42bmp" event={"ID":"23b0cccd-5975-430f-9566-073f6f9606b0","Type":"ContainerStarted","Data":"6d53538410cb78ce716eee7a8c0223f93fec363eac481688bba27f81f4520b6c"} Dec 12 16:53:35 crc kubenswrapper[4693]: I1212 16:53:35.449677 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-42bmp" podStartSLOduration=2.643008472 podStartE2EDuration="5.449655893s" podCreationTimestamp="2025-12-12 16:53:30 +0000 UTC" firstStartedPulling="2025-12-12 16:53:32.346248703 +0000 UTC m=+4039.514888344" lastFinishedPulling="2025-12-12 16:53:35.152896134 +0000 UTC m=+4042.321535765" observedRunningTime="2025-12-12 16:53:35.439344047 +0000 UTC m=+4042.607983648" watchObservedRunningTime="2025-12-12 16:53:35.449655893 
+0000 UTC m=+4042.618295504" Dec 12 16:53:41 crc kubenswrapper[4693]: I1212 16:53:41.012063 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:41 crc kubenswrapper[4693]: I1212 16:53:41.012537 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:41 crc kubenswrapper[4693]: I1212 16:53:41.073297 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:41 crc kubenswrapper[4693]: I1212 16:53:41.546487 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:41 crc kubenswrapper[4693]: I1212 16:53:41.600891 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-42bmp"] Dec 12 16:53:43 crc kubenswrapper[4693]: I1212 16:53:43.516205 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-42bmp" podUID="23b0cccd-5975-430f-9566-073f6f9606b0" containerName="registry-server" containerID="cri-o://6d53538410cb78ce716eee7a8c0223f93fec363eac481688bba27f81f4520b6c" gracePeriod=2 Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.034178 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.048590 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b0cccd-5975-430f-9566-073f6f9606b0-catalog-content\") pod \"23b0cccd-5975-430f-9566-073f6f9606b0\" (UID: \"23b0cccd-5975-430f-9566-073f6f9606b0\") " Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.048716 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b0cccd-5975-430f-9566-073f6f9606b0-utilities\") pod \"23b0cccd-5975-430f-9566-073f6f9606b0\" (UID: \"23b0cccd-5975-430f-9566-073f6f9606b0\") " Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.048889 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k22mt\" (UniqueName: \"kubernetes.io/projected/23b0cccd-5975-430f-9566-073f6f9606b0-kube-api-access-k22mt\") pod \"23b0cccd-5975-430f-9566-073f6f9606b0\" (UID: \"23b0cccd-5975-430f-9566-073f6f9606b0\") " Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.049875 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23b0cccd-5975-430f-9566-073f6f9606b0-utilities" (OuterVolumeSpecName: "utilities") pod "23b0cccd-5975-430f-9566-073f6f9606b0" (UID: "23b0cccd-5975-430f-9566-073f6f9606b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.056242 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b0cccd-5975-430f-9566-073f6f9606b0-kube-api-access-k22mt" (OuterVolumeSpecName: "kube-api-access-k22mt") pod "23b0cccd-5975-430f-9566-073f6f9606b0" (UID: "23b0cccd-5975-430f-9566-073f6f9606b0"). InnerVolumeSpecName "kube-api-access-k22mt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.120865 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23b0cccd-5975-430f-9566-073f6f9606b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23b0cccd-5975-430f-9566-073f6f9606b0" (UID: "23b0cccd-5975-430f-9566-073f6f9606b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.161115 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b0cccd-5975-430f-9566-073f6f9606b0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.161437 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b0cccd-5975-430f-9566-073f6f9606b0-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.161522 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k22mt\" (UniqueName: \"kubernetes.io/projected/23b0cccd-5975-430f-9566-073f6f9606b0-kube-api-access-k22mt\") on node \"crc\" DevicePath \"\"" Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.529029 4693 generic.go:334] "Generic (PLEG): container finished" podID="23b0cccd-5975-430f-9566-073f6f9606b0" containerID="6d53538410cb78ce716eee7a8c0223f93fec363eac481688bba27f81f4520b6c" exitCode=0 Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.529073 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42bmp" event={"ID":"23b0cccd-5975-430f-9566-073f6f9606b0","Type":"ContainerDied","Data":"6d53538410cb78ce716eee7a8c0223f93fec363eac481688bba27f81f4520b6c"} Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.529101 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42bmp" event={"ID":"23b0cccd-5975-430f-9566-073f6f9606b0","Type":"ContainerDied","Data":"0800e1a93df9ac25c0b460faa87a8fcbad3888075e50a71c6e71be8a767e7283"} Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.529118 4693 scope.go:117] "RemoveContainer" containerID="6d53538410cb78ce716eee7a8c0223f93fec363eac481688bba27f81f4520b6c" Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.529130 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-42bmp" Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.556030 4693 scope.go:117] "RemoveContainer" containerID="c3f6f6af67f6eabf9b3545cf3945f79ae99c8283a75b8ed9de136838a2569a6a" Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.572569 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-42bmp"] Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.586153 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-42bmp"] Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.586595 4693 scope.go:117] "RemoveContainer" containerID="96570d00f3942ebe87ce4480ca8b399f22ce8be290aa14947e38ce218951aa9f" Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.652977 4693 scope.go:117] "RemoveContainer" containerID="6d53538410cb78ce716eee7a8c0223f93fec363eac481688bba27f81f4520b6c" Dec 12 16:53:44 crc kubenswrapper[4693]: E1212 16:53:44.653382 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d53538410cb78ce716eee7a8c0223f93fec363eac481688bba27f81f4520b6c\": container with ID starting with 6d53538410cb78ce716eee7a8c0223f93fec363eac481688bba27f81f4520b6c not found: ID does not exist" containerID="6d53538410cb78ce716eee7a8c0223f93fec363eac481688bba27f81f4520b6c" Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.653426 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d53538410cb78ce716eee7a8c0223f93fec363eac481688bba27f81f4520b6c"} err="failed to get container status \"6d53538410cb78ce716eee7a8c0223f93fec363eac481688bba27f81f4520b6c\": rpc error: code = NotFound desc = could not find container \"6d53538410cb78ce716eee7a8c0223f93fec363eac481688bba27f81f4520b6c\": container with ID starting with 6d53538410cb78ce716eee7a8c0223f93fec363eac481688bba27f81f4520b6c not found: ID does not exist" Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.653454 4693 scope.go:117] "RemoveContainer" containerID="c3f6f6af67f6eabf9b3545cf3945f79ae99c8283a75b8ed9de136838a2569a6a" Dec 12 16:53:44 crc kubenswrapper[4693]: E1212 16:53:44.653710 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3f6f6af67f6eabf9b3545cf3945f79ae99c8283a75b8ed9de136838a2569a6a\": container with ID starting with c3f6f6af67f6eabf9b3545cf3945f79ae99c8283a75b8ed9de136838a2569a6a not found: ID does not exist" containerID="c3f6f6af67f6eabf9b3545cf3945f79ae99c8283a75b8ed9de136838a2569a6a" Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.653741 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3f6f6af67f6eabf9b3545cf3945f79ae99c8283a75b8ed9de136838a2569a6a"} err="failed to get container status \"c3f6f6af67f6eabf9b3545cf3945f79ae99c8283a75b8ed9de136838a2569a6a\": rpc error: code = NotFound desc = could not find container \"c3f6f6af67f6eabf9b3545cf3945f79ae99c8283a75b8ed9de136838a2569a6a\": container with ID starting with c3f6f6af67f6eabf9b3545cf3945f79ae99c8283a75b8ed9de136838a2569a6a not found: ID does not exist" Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.653764 4693 scope.go:117] "RemoveContainer" containerID="96570d00f3942ebe87ce4480ca8b399f22ce8be290aa14947e38ce218951aa9f" Dec 12 16:53:44 crc kubenswrapper[4693]: E1212 16:53:44.654185 4693 log.go:32] "ContainerStatus from runtime service 
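
The "Killing container with a grace period" entry above (gracePeriod=2) is the standard two-step stop: the runtime delivers SIGTERM, waits out the grace period, then SIGKILLs whatever remains, and the ContainerDied events follow about a second later. A generic sketch of that pattern against a local process, not kubelet or CRI-O code (stopWithGrace is an invented helper, Unix-only):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithGrace mirrors the SIGTERM-then-SIGKILL stop sequence.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        _ = cmd.Process.Signal(syscall.SIGTERM) // polite request
        select {
        case <-done: // exited within the grace period
        case <-time.After(grace):
            _ = cmd.Process.Kill() // grace period expired: SIGKILL
            <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        start := time.Now()
        stopWithGrace(cmd, 2*time.Second)
        fmt.Println("stopped after", time.Since(start))
    }
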
failed" err="rpc error: code = NotFound desc = could not find container \"96570d00f3942ebe87ce4480ca8b399f22ce8be290aa14947e38ce218951aa9f\": container with ID starting with 96570d00f3942ebe87ce4480ca8b399f22ce8be290aa14947e38ce218951aa9f not found: ID does not exist" containerID="96570d00f3942ebe87ce4480ca8b399f22ce8be290aa14947e38ce218951aa9f" Dec 12 16:53:44 crc kubenswrapper[4693]: I1212 16:53:44.654214 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96570d00f3942ebe87ce4480ca8b399f22ce8be290aa14947e38ce218951aa9f"} err="failed to get container status \"96570d00f3942ebe87ce4480ca8b399f22ce8be290aa14947e38ce218951aa9f\": rpc error: code = NotFound desc = could not find container \"96570d00f3942ebe87ce4480ca8b399f22ce8be290aa14947e38ce218951aa9f\": container with ID starting with 96570d00f3942ebe87ce4480ca8b399f22ce8be290aa14947e38ce218951aa9f not found: ID does not exist" Dec 12 16:53:45 crc kubenswrapper[4693]: I1212 16:53:45.370588 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23b0cccd-5975-430f-9566-073f6f9606b0" path="/var/lib/kubelet/pods/23b0cccd-5975-430f-9566-073f6f9606b0/volumes" Dec 12 16:55:12 crc kubenswrapper[4693]: I1212 16:55:12.531079 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:55:12 crc kubenswrapper[4693]: I1212 16:55:12.532826 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.079571 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rk4j6"] Dec 12 16:55:38 crc kubenswrapper[4693]: E1212 16:55:38.080823 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b0cccd-5975-430f-9566-073f6f9606b0" containerName="extract-utilities" Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.080851 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b0cccd-5975-430f-9566-073f6f9606b0" containerName="extract-utilities" Dec 12 16:55:38 crc kubenswrapper[4693]: E1212 16:55:38.080867 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b0cccd-5975-430f-9566-073f6f9606b0" containerName="registry-server" Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.080874 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b0cccd-5975-430f-9566-073f6f9606b0" containerName="registry-server" Dec 12 16:55:38 crc kubenswrapper[4693]: E1212 16:55:38.080934 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b0cccd-5975-430f-9566-073f6f9606b0" containerName="extract-content" Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.080941 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b0cccd-5975-430f-9566-073f6f9606b0" containerName="extract-content" Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.081226 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b0cccd-5975-430f-9566-073f6f9606b0" containerName="registry-server" Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 
Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.109531 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rk4j6"]
Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.153100 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84582c2-3b38-40c9-8aee-c66fb509382b-catalog-content\") pod \"certified-operators-rk4j6\" (UID: \"b84582c2-3b38-40c9-8aee-c66fb509382b\") " pod="openshift-marketplace/certified-operators-rk4j6"
Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.153201 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84582c2-3b38-40c9-8aee-c66fb509382b-utilities\") pod \"certified-operators-rk4j6\" (UID: \"b84582c2-3b38-40c9-8aee-c66fb509382b\") " pod="openshift-marketplace/certified-operators-rk4j6"
Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.153244 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx7x6\" (UniqueName: \"kubernetes.io/projected/b84582c2-3b38-40c9-8aee-c66fb509382b-kube-api-access-vx7x6\") pod \"certified-operators-rk4j6\" (UID: \"b84582c2-3b38-40c9-8aee-c66fb509382b\") " pod="openshift-marketplace/certified-operators-rk4j6"
Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.255797 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84582c2-3b38-40c9-8aee-c66fb509382b-catalog-content\") pod \"certified-operators-rk4j6\" (UID: \"b84582c2-3b38-40c9-8aee-c66fb509382b\") " pod="openshift-marketplace/certified-operators-rk4j6"
Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.255938 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84582c2-3b38-40c9-8aee-c66fb509382b-utilities\") pod \"certified-operators-rk4j6\" (UID: \"b84582c2-3b38-40c9-8aee-c66fb509382b\") " pod="openshift-marketplace/certified-operators-rk4j6"
Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.255997 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx7x6\" (UniqueName: \"kubernetes.io/projected/b84582c2-3b38-40c9-8aee-c66fb509382b-kube-api-access-vx7x6\") pod \"certified-operators-rk4j6\" (UID: \"b84582c2-3b38-40c9-8aee-c66fb509382b\") " pod="openshift-marketplace/certified-operators-rk4j6"
Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.256346 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84582c2-3b38-40c9-8aee-c66fb509382b-catalog-content\") pod \"certified-operators-rk4j6\" (UID: \"b84582c2-3b38-40c9-8aee-c66fb509382b\") " pod="openshift-marketplace/certified-operators-rk4j6"
Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.256437 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84582c2-3b38-40c9-8aee-c66fb509382b-utilities\") pod \"certified-operators-rk4j6\" (UID: \"b84582c2-3b38-40c9-8aee-c66fb509382b\") " pod="openshift-marketplace/certified-operators-rk4j6"
Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.276747 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx7x6\" (UniqueName: \"kubernetes.io/projected/b84582c2-3b38-40c9-8aee-c66fb509382b-kube-api-access-vx7x6\") pod \"certified-operators-rk4j6\" (UID: \"b84582c2-3b38-40c9-8aee-c66fb509382b\") " pod="openshift-marketplace/certified-operators-rk4j6"
Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.403659 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rk4j6"
Dec 12 16:55:38 crc kubenswrapper[4693]: I1212 16:55:38.946801 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rk4j6"]
Dec 12 16:55:39 crc kubenswrapper[4693]: I1212 16:55:39.486901 4693 generic.go:334] "Generic (PLEG): container finished" podID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerID="1f39516da1ea14ef369f5d76d048ae8d302c90956a3b4c92464f4a6f69fbb309" exitCode=0
Dec 12 16:55:39 crc kubenswrapper[4693]: I1212 16:55:39.487184 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4j6" event={"ID":"b84582c2-3b38-40c9-8aee-c66fb509382b","Type":"ContainerDied","Data":"1f39516da1ea14ef369f5d76d048ae8d302c90956a3b4c92464f4a6f69fbb309"}
Dec 12 16:55:39 crc kubenswrapper[4693]: I1212 16:55:39.487208 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4j6" event={"ID":"b84582c2-3b38-40c9-8aee-c66fb509382b","Type":"ContainerStarted","Data":"75d6225ac664ec1223310536442d20e30df203f4d25a5cabd5709509ca25e331"}
Dec 12 16:55:40 crc kubenswrapper[4693]: I1212 16:55:40.509167 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4j6" event={"ID":"b84582c2-3b38-40c9-8aee-c66fb509382b","Type":"ContainerStarted","Data":"f9851131d8ef66361a567ef6e6a03f816d450ec18bb8909498c30b56c52f26ba"}
Dec 12 16:55:41 crc kubenswrapper[4693]: I1212 16:55:41.521533 4693 generic.go:334] "Generic (PLEG): container finished" podID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerID="f9851131d8ef66361a567ef6e6a03f816d450ec18bb8909498c30b56c52f26ba" exitCode=0
Dec 12 16:55:41 crc kubenswrapper[4693]: I1212 16:55:41.521814 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4j6" event={"ID":"b84582c2-3b38-40c9-8aee-c66fb509382b","Type":"ContainerDied","Data":"f9851131d8ef66361a567ef6e6a03f816d450ec18bb8909498c30b56c52f26ba"}
Dec 12 16:55:42 crc kubenswrapper[4693]: I1212 16:55:42.532096 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:55:42 crc kubenswrapper[4693]: I1212 16:55:42.532645 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:55:42 crc kubenswrapper[4693]: I1212 16:55:42.534421 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4j6" event={"ID":"b84582c2-3b38-40c9-8aee-c66fb509382b","Type":"ContainerStarted","Data":"b6b858f87f0e3daf5cfeec7bde60d3956b70fb8f56ed91350780574e7f9c1bb5"}
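
Every kubenswrapper line above follows the klog header format: a severity letter (I/W/E/F), MMDD date, wall time, PID, source file:line, then "]" and a structured message. A small parser sketch for slicing these entries apart (the regex and field names are my own, not a library API):

    package main

    import (
        "fmt"
        "regexp"
    )

    // klog header: Lmmdd hh:mm:ss.uuuuuu pid file:line] msg
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

    func main() {
        line := `I1212 16:55:38.276747 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume ..."`
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no match")
            return
        }
        fmt.Println("severity:", m[1]) // I = info, E = error
        fmt.Println("date:", m[2], "time:", m[3])
        fmt.Println("pid:", m[4], "source:", m[5])
        fmt.Println("message:", m[6])
    }
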
event={"ID":"b84582c2-3b38-40c9-8aee-c66fb509382b","Type":"ContainerStarted","Data":"b6b858f87f0e3daf5cfeec7bde60d3956b70fb8f56ed91350780574e7f9c1bb5"} Dec 12 16:55:42 crc kubenswrapper[4693]: I1212 16:55:42.575097 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rk4j6" podStartSLOduration=2.079386455 podStartE2EDuration="4.575081838s" podCreationTimestamp="2025-12-12 16:55:38 +0000 UTC" firstStartedPulling="2025-12-12 16:55:39.488504038 +0000 UTC m=+4166.657143639" lastFinishedPulling="2025-12-12 16:55:41.984199421 +0000 UTC m=+4169.152839022" observedRunningTime="2025-12-12 16:55:42.572098838 +0000 UTC m=+4169.740738439" watchObservedRunningTime="2025-12-12 16:55:42.575081838 +0000 UTC m=+4169.743721429" Dec 12 16:55:48 crc kubenswrapper[4693]: I1212 16:55:48.404855 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rk4j6" Dec 12 16:55:48 crc kubenswrapper[4693]: I1212 16:55:48.405538 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rk4j6" Dec 12 16:55:48 crc kubenswrapper[4693]: I1212 16:55:48.460327 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rk4j6" Dec 12 16:55:49 crc kubenswrapper[4693]: I1212 16:55:49.342160 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rk4j6" Dec 12 16:55:49 crc kubenswrapper[4693]: I1212 16:55:49.402174 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rk4j6"] Dec 12 16:55:50 crc kubenswrapper[4693]: I1212 16:55:50.634344 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rk4j6" podUID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerName="registry-server" containerID="cri-o://b6b858f87f0e3daf5cfeec7bde60d3956b70fb8f56ed91350780574e7f9c1bb5" gracePeriod=2 Dec 12 16:55:51 crc kubenswrapper[4693]: I1212 16:55:51.654040 4693 generic.go:334] "Generic (PLEG): container finished" podID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerID="b6b858f87f0e3daf5cfeec7bde60d3956b70fb8f56ed91350780574e7f9c1bb5" exitCode=0 Dec 12 16:55:51 crc kubenswrapper[4693]: I1212 16:55:51.654336 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4j6" event={"ID":"b84582c2-3b38-40c9-8aee-c66fb509382b","Type":"ContainerDied","Data":"b6b858f87f0e3daf5cfeec7bde60d3956b70fb8f56ed91350780574e7f9c1bb5"} Dec 12 16:55:51 crc kubenswrapper[4693]: I1212 16:55:51.654721 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4j6" event={"ID":"b84582c2-3b38-40c9-8aee-c66fb509382b","Type":"ContainerDied","Data":"75d6225ac664ec1223310536442d20e30df203f4d25a5cabd5709509ca25e331"} Dec 12 16:55:51 crc kubenswrapper[4693]: I1212 16:55:51.654743 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75d6225ac664ec1223310536442d20e30df203f4d25a5cabd5709509ca25e331" Dec 12 16:55:51 crc kubenswrapper[4693]: I1212 16:55:51.731064 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rk4j6" Dec 12 16:55:51 crc kubenswrapper[4693]: I1212 16:55:51.818652 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx7x6\" (UniqueName: \"kubernetes.io/projected/b84582c2-3b38-40c9-8aee-c66fb509382b-kube-api-access-vx7x6\") pod \"b84582c2-3b38-40c9-8aee-c66fb509382b\" (UID: \"b84582c2-3b38-40c9-8aee-c66fb509382b\") " Dec 12 16:55:51 crc kubenswrapper[4693]: I1212 16:55:51.818757 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84582c2-3b38-40c9-8aee-c66fb509382b-utilities\") pod \"b84582c2-3b38-40c9-8aee-c66fb509382b\" (UID: \"b84582c2-3b38-40c9-8aee-c66fb509382b\") " Dec 12 16:55:51 crc kubenswrapper[4693]: I1212 16:55:51.818863 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84582c2-3b38-40c9-8aee-c66fb509382b-catalog-content\") pod \"b84582c2-3b38-40c9-8aee-c66fb509382b\" (UID: \"b84582c2-3b38-40c9-8aee-c66fb509382b\") " Dec 12 16:55:51 crc kubenswrapper[4693]: I1212 16:55:51.822559 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b84582c2-3b38-40c9-8aee-c66fb509382b-utilities" (OuterVolumeSpecName: "utilities") pod "b84582c2-3b38-40c9-8aee-c66fb509382b" (UID: "b84582c2-3b38-40c9-8aee-c66fb509382b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:55:51 crc kubenswrapper[4693]: I1212 16:55:51.833762 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b84582c2-3b38-40c9-8aee-c66fb509382b-kube-api-access-vx7x6" (OuterVolumeSpecName: "kube-api-access-vx7x6") pod "b84582c2-3b38-40c9-8aee-c66fb509382b" (UID: "b84582c2-3b38-40c9-8aee-c66fb509382b"). InnerVolumeSpecName "kube-api-access-vx7x6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 16:55:51 crc kubenswrapper[4693]: I1212 16:55:51.882615 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b84582c2-3b38-40c9-8aee-c66fb509382b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b84582c2-3b38-40c9-8aee-c66fb509382b" (UID: "b84582c2-3b38-40c9-8aee-c66fb509382b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 16:55:51 crc kubenswrapper[4693]: I1212 16:55:51.921464 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84582c2-3b38-40c9-8aee-c66fb509382b-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:55:51 crc kubenswrapper[4693]: I1212 16:55:51.921501 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84582c2-3b38-40c9-8aee-c66fb509382b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:55:51 crc kubenswrapper[4693]: I1212 16:55:51.921514 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx7x6\" (UniqueName: \"kubernetes.io/projected/b84582c2-3b38-40c9-8aee-c66fb509382b-kube-api-access-vx7x6\") on node \"crc\" DevicePath \"\"" Dec 12 16:55:52 crc kubenswrapper[4693]: I1212 16:55:52.667563 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rk4j6" Dec 12 16:55:52 crc kubenswrapper[4693]: I1212 16:55:52.726745 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rk4j6"] Dec 12 16:55:52 crc kubenswrapper[4693]: I1212 16:55:52.746970 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rk4j6"] Dec 12 16:55:53 crc kubenswrapper[4693]: I1212 16:55:53.375717 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b84582c2-3b38-40c9-8aee-c66fb509382b" path="/var/lib/kubelet/pods/b84582c2-3b38-40c9-8aee-c66fb509382b/volumes" Dec 12 16:56:12 crc kubenswrapper[4693]: I1212 16:56:12.530392 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:56:12 crc kubenswrapper[4693]: I1212 16:56:12.530909 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:56:12 crc kubenswrapper[4693]: I1212 16:56:12.530967 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 16:56:12 crc kubenswrapper[4693]: I1212 16:56:12.532084 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4bc5886d2bac521b1340d46b8799ca6fd8658509c0ec214ab3af9c4d4338c79"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:56:12 crc kubenswrapper[4693]: I1212 16:56:12.532159 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://c4bc5886d2bac521b1340d46b8799ca6fd8658509c0ec214ab3af9c4d4338c79" gracePeriod=600 Dec 12 16:56:12 crc kubenswrapper[4693]: I1212 16:56:12.917962 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="c4bc5886d2bac521b1340d46b8799ca6fd8658509c0ec214ab3af9c4d4338c79" exitCode=0 Dec 12 16:56:12 crc kubenswrapper[4693]: I1212 16:56:12.918096 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"c4bc5886d2bac521b1340d46b8799ca6fd8658509c0ec214ab3af9c4d4338c79"} Dec 12 16:56:12 crc kubenswrapper[4693]: I1212 16:56:12.918539 4693 scope.go:117] "RemoveContainer" containerID="c45ddd31162731cd9021c449fde896ca5ec5fbd7b26f8024ecd1959b49095273" Dec 12 16:56:13 crc kubenswrapper[4693]: I1212 16:56:13.941473 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" 
event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90"} Dec 12 16:57:52 crc kubenswrapper[4693]: E1212 16:57:52.321305 4693 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.204:55100->38.102.83.204:43805: write tcp 38.102.83.204:55100->38.102.83.204:43805: write: connection reset by peer Dec 12 16:58:12 crc kubenswrapper[4693]: I1212 16:58:12.530311 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:58:12 crc kubenswrapper[4693]: I1212 16:58:12.530888 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:58:42 crc kubenswrapper[4693]: I1212 16:58:42.530343 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:58:42 crc kubenswrapper[4693]: I1212 16:58:42.530778 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:59:12 crc kubenswrapper[4693]: I1212 16:59:12.531091 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:59:12 crc kubenswrapper[4693]: I1212 16:59:12.531782 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:59:12 crc kubenswrapper[4693]: I1212 16:59:12.531879 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 16:59:12 crc kubenswrapper[4693]: I1212 16:59:12.533331 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:59:12 crc kubenswrapper[4693]: I1212 16:59:12.533449 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" 
podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" gracePeriod=600 Dec 12 16:59:12 crc kubenswrapper[4693]: E1212 16:59:12.664516 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:59:13 crc kubenswrapper[4693]: I1212 16:59:13.486873 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" exitCode=0 Dec 12 16:59:13 crc kubenswrapper[4693]: I1212 16:59:13.486934 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90"} Dec 12 16:59:13 crc kubenswrapper[4693]: I1212 16:59:13.486999 4693 scope.go:117] "RemoveContainer" containerID="c4bc5886d2bac521b1340d46b8799ca6fd8658509c0ec214ab3af9c4d4338c79" Dec 12 16:59:13 crc kubenswrapper[4693]: I1212 16:59:13.489111 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 16:59:13 crc kubenswrapper[4693]: E1212 16:59:13.490633 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:59:25 crc kubenswrapper[4693]: I1212 16:59:25.357465 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 16:59:25 crc kubenswrapper[4693]: E1212 16:59:25.358497 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:59:36 crc kubenswrapper[4693]: I1212 16:59:36.357501 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 16:59:36 crc kubenswrapper[4693]: E1212 16:59:36.358080 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 16:59:50 crc kubenswrapper[4693]: I1212 16:59:50.357240 4693 scope.go:117] 
"RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 16:59:50 crc kubenswrapper[4693]: E1212 16:59:50.359868 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.191881 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj"] Dec 12 17:00:00 crc kubenswrapper[4693]: E1212 17:00:00.193216 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerName="registry-server" Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.193241 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerName="registry-server" Dec 12 17:00:00 crc kubenswrapper[4693]: E1212 17:00:00.193253 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerName="extract-content" Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.193258 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerName="extract-content" Dec 12 17:00:00 crc kubenswrapper[4693]: E1212 17:00:00.193331 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerName="extract-utilities" Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.193339 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerName="extract-utilities" Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.193613 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerName="registry-server" Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.194489 4693 util.go:30] "No sandbox for pod can be found. 
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.191881 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj"]
Dec 12 17:00:00 crc kubenswrapper[4693]: E1212 17:00:00.193216 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerName="registry-server"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.193241 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerName="registry-server"
Dec 12 17:00:00 crc kubenswrapper[4693]: E1212 17:00:00.193253 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerName="extract-content"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.193258 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerName="extract-content"
Dec 12 17:00:00 crc kubenswrapper[4693]: E1212 17:00:00.193331 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerName="extract-utilities"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.193339 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerName="extract-utilities"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.193613 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="b84582c2-3b38-40c9-8aee-c66fb509382b" containerName="registry-server"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.194489 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.197517 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.199187 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.206738 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj"]
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.303469 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-config-volume\") pod \"collect-profiles-29425980-xmvbj\" (UID: \"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.303569 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lcnx\" (UniqueName: \"kubernetes.io/projected/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-kube-api-access-5lcnx\") pod \"collect-profiles-29425980-xmvbj\" (UID: \"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.303668 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-secret-volume\") pod \"collect-profiles-29425980-xmvbj\" (UID: \"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.406387 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-secret-volume\") pod \"collect-profiles-29425980-xmvbj\" (UID: \"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.406849 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-config-volume\") pod \"collect-profiles-29425980-xmvbj\" (UID: \"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.406978 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lcnx\" (UniqueName: \"kubernetes.io/projected/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-kube-api-access-5lcnx\") pod \"collect-profiles-29425980-xmvbj\" (UID: \"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.407904 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-config-volume\") pod \"collect-profiles-29425980-xmvbj\" (UID: \"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.415076 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-secret-volume\") pod \"collect-profiles-29425980-xmvbj\" (UID: \"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.426577 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lcnx\" (UniqueName: \"kubernetes.io/projected/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-kube-api-access-5lcnx\") pod \"collect-profiles-29425980-xmvbj\" (UID: \"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj"
Dec 12 17:00:00 crc kubenswrapper[4693]: I1212 17:00:00.519508 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj"
Dec 12 17:00:01 crc kubenswrapper[4693]: I1212 17:00:01.022876 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj"]
Dec 12 17:00:01 crc kubenswrapper[4693]: I1212 17:00:01.122633 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj" event={"ID":"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b","Type":"ContainerStarted","Data":"0c856ea480d6b09caebbb411e8b6ba5a5425f9164f7b7386c2709599ee9b15fb"}
Dec 12 17:00:01 crc kubenswrapper[4693]: I1212 17:00:01.357161 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90"
Dec 12 17:00:01 crc kubenswrapper[4693]: E1212 17:00:01.357889 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d"
Dec 12 17:00:02 crc kubenswrapper[4693]: I1212 17:00:02.135054 4693 generic.go:334] "Generic (PLEG): container finished" podID="93c2dafe-ceeb-480c-a8b4-3c0c839ad90b" containerID="fb5fd5cfba7e135864c3b6a02c88da49c9a074278e037788c3b70d28e9aeee18" exitCode=0
Dec 12 17:00:02 crc kubenswrapper[4693]: I1212 17:00:02.135095 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj" event={"ID":"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b","Type":"ContainerDied","Data":"fb5fd5cfba7e135864c3b6a02c88da49c9a074278e037788c3b70d28e9aeee18"}
Dec 12 17:00:03 crc kubenswrapper[4693]: I1212 17:00:03.634258 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj" Dec 12 17:00:03 crc kubenswrapper[4693]: I1212 17:00:03.812463 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcnx\" (UniqueName: \"kubernetes.io/projected/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-kube-api-access-5lcnx\") pod \"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b\" (UID: \"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b\") " Dec 12 17:00:03 crc kubenswrapper[4693]: I1212 17:00:03.812711 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-config-volume\") pod \"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b\" (UID: \"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b\") " Dec 12 17:00:03 crc kubenswrapper[4693]: I1212 17:00:03.812736 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-secret-volume\") pod \"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b\" (UID: \"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b\") " Dec 12 17:00:03 crc kubenswrapper[4693]: I1212 17:00:03.814473 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-config-volume" (OuterVolumeSpecName: "config-volume") pod "93c2dafe-ceeb-480c-a8b4-3c0c839ad90b" (UID: "93c2dafe-ceeb-480c-a8b4-3c0c839ad90b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 17:00:03 crc kubenswrapper[4693]: I1212 17:00:03.825541 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-kube-api-access-5lcnx" (OuterVolumeSpecName: "kube-api-access-5lcnx") pod "93c2dafe-ceeb-480c-a8b4-3c0c839ad90b" (UID: "93c2dafe-ceeb-480c-a8b4-3c0c839ad90b"). InnerVolumeSpecName "kube-api-access-5lcnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 17:00:03 crc kubenswrapper[4693]: I1212 17:00:03.827621 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "93c2dafe-ceeb-480c-a8b4-3c0c839ad90b" (UID: "93c2dafe-ceeb-480c-a8b4-3c0c839ad90b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 17:00:03 crc kubenswrapper[4693]: I1212 17:00:03.916770 4693 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 17:00:03 crc kubenswrapper[4693]: I1212 17:00:03.917133 4693 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 17:00:03 crc kubenswrapper[4693]: I1212 17:00:03.917147 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lcnx\" (UniqueName: \"kubernetes.io/projected/93c2dafe-ceeb-480c-a8b4-3c0c839ad90b-kube-api-access-5lcnx\") on node \"crc\" DevicePath \"\"" Dec 12 17:00:04 crc kubenswrapper[4693]: I1212 17:00:04.179469 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj" event={"ID":"93c2dafe-ceeb-480c-a8b4-3c0c839ad90b","Type":"ContainerDied","Data":"0c856ea480d6b09caebbb411e8b6ba5a5425f9164f7b7386c2709599ee9b15fb"} Dec 12 17:00:04 crc kubenswrapper[4693]: I1212 17:00:04.179503 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425980-xmvbj" Dec 12 17:00:04 crc kubenswrapper[4693]: I1212 17:00:04.179524 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c856ea480d6b09caebbb411e8b6ba5a5425f9164f7b7386c2709599ee9b15fb" Dec 12 17:00:04 crc kubenswrapper[4693]: I1212 17:00:04.723228 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f"] Dec 12 17:00:04 crc kubenswrapper[4693]: I1212 17:00:04.736055 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425935-6984f"] Dec 12 17:00:05 crc kubenswrapper[4693]: I1212 17:00:05.383654 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd1c81c4-e711-4356-a2a5-4647ae66ffb7" path="/var/lib/kubelet/pods/bd1c81c4-e711-4356-a2a5-4647ae66ffb7/volumes" Dec 12 17:00:14 crc kubenswrapper[4693]: I1212 17:00:14.357248 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:00:14 crc kubenswrapper[4693]: E1212 17:00:14.358118 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:00:26 crc kubenswrapper[4693]: I1212 17:00:26.358430 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:00:26 crc kubenswrapper[4693]: E1212 17:00:26.359650 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:00:40 crc kubenswrapper[4693]: I1212 17:00:40.358711 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:00:40 crc kubenswrapper[4693]: E1212 17:00:40.359612 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:00:52 crc kubenswrapper[4693]: I1212 17:00:52.360816 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:00:52 crc kubenswrapper[4693]: E1212 17:00:52.362016 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.183535 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29425981-9qdn4"] Dec 12 17:01:00 crc kubenswrapper[4693]: E1212 17:01:00.184897 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c2dafe-ceeb-480c-a8b4-3c0c839ad90b" containerName="collect-profiles" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.184932 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c2dafe-ceeb-480c-a8b4-3c0c839ad90b" containerName="collect-profiles" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.185444 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c2dafe-ceeb-480c-a8b4-3c0c839ad90b" containerName="collect-profiles" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.186761 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29425981-9qdn4" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.220654 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29425981-9qdn4"] Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.381556 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbzmt\" (UniqueName: \"kubernetes.io/projected/73c05817-cc0b-41d9-8e36-a34a28489c55-kube-api-access-rbzmt\") pod \"keystone-cron-29425981-9qdn4\" (UID: \"73c05817-cc0b-41d9-8e36-a34a28489c55\") " pod="openstack/keystone-cron-29425981-9qdn4" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.381639 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-fernet-keys\") pod \"keystone-cron-29425981-9qdn4\" (UID: \"73c05817-cc0b-41d9-8e36-a34a28489c55\") " pod="openstack/keystone-cron-29425981-9qdn4" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.381674 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-combined-ca-bundle\") pod \"keystone-cron-29425981-9qdn4\" (UID: \"73c05817-cc0b-41d9-8e36-a34a28489c55\") " pod="openstack/keystone-cron-29425981-9qdn4" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.381695 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-config-data\") pod \"keystone-cron-29425981-9qdn4\" (UID: \"73c05817-cc0b-41d9-8e36-a34a28489c55\") " pod="openstack/keystone-cron-29425981-9qdn4" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.483859 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbzmt\" (UniqueName: \"kubernetes.io/projected/73c05817-cc0b-41d9-8e36-a34a28489c55-kube-api-access-rbzmt\") pod \"keystone-cron-29425981-9qdn4\" (UID: \"73c05817-cc0b-41d9-8e36-a34a28489c55\") " pod="openstack/keystone-cron-29425981-9qdn4" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.483951 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-fernet-keys\") pod \"keystone-cron-29425981-9qdn4\" (UID: \"73c05817-cc0b-41d9-8e36-a34a28489c55\") " pod="openstack/keystone-cron-29425981-9qdn4" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.484013 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-combined-ca-bundle\") pod \"keystone-cron-29425981-9qdn4\" (UID: \"73c05817-cc0b-41d9-8e36-a34a28489c55\") " pod="openstack/keystone-cron-29425981-9qdn4" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.484031 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-config-data\") pod \"keystone-cron-29425981-9qdn4\" (UID: \"73c05817-cc0b-41d9-8e36-a34a28489c55\") " pod="openstack/keystone-cron-29425981-9qdn4" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.493573 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-config-data\") pod \"keystone-cron-29425981-9qdn4\" (UID: \"73c05817-cc0b-41d9-8e36-a34a28489c55\") " pod="openstack/keystone-cron-29425981-9qdn4" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.494670 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-combined-ca-bundle\") pod \"keystone-cron-29425981-9qdn4\" (UID: \"73c05817-cc0b-41d9-8e36-a34a28489c55\") " pod="openstack/keystone-cron-29425981-9qdn4" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.497104 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-fernet-keys\") pod \"keystone-cron-29425981-9qdn4\" (UID: \"73c05817-cc0b-41d9-8e36-a34a28489c55\") " pod="openstack/keystone-cron-29425981-9qdn4" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.506044 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbzmt\" (UniqueName: \"kubernetes.io/projected/73c05817-cc0b-41d9-8e36-a34a28489c55-kube-api-access-rbzmt\") pod \"keystone-cron-29425981-9qdn4\" (UID: \"73c05817-cc0b-41d9-8e36-a34a28489c55\") " pod="openstack/keystone-cron-29425981-9qdn4" Dec 12 17:01:00 crc kubenswrapper[4693]: I1212 17:01:00.506684 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29425981-9qdn4" Dec 12 17:01:01 crc kubenswrapper[4693]: I1212 17:01:01.011764 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29425981-9qdn4"] Dec 12 17:01:01 crc kubenswrapper[4693]: I1212 17:01:01.641480 4693 scope.go:117] "RemoveContainer" containerID="7b05f03eb1249f7e9f7c388c7d8be1b77a7c101461537a568b82c5a49ee69da8" Dec 12 17:01:01 crc kubenswrapper[4693]: I1212 17:01:01.882133 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29425981-9qdn4" event={"ID":"73c05817-cc0b-41d9-8e36-a34a28489c55","Type":"ContainerStarted","Data":"e2e203b5c178ece8f5867fc9a91bcc81018c43732d2d26650948d8a0aef2d347"} Dec 12 17:01:01 crc kubenswrapper[4693]: I1212 17:01:01.882195 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29425981-9qdn4" event={"ID":"73c05817-cc0b-41d9-8e36-a34a28489c55","Type":"ContainerStarted","Data":"d0dc5143ccc03e4554d2d4b4b6ed0675bd6ed77cb95ee45b6b720f3fe159b035"} Dec 12 17:01:01 crc kubenswrapper[4693]: I1212 17:01:01.912496 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29425981-9qdn4" podStartSLOduration=1.912474145 podStartE2EDuration="1.912474145s" podCreationTimestamp="2025-12-12 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:01:01.8988261 +0000 UTC m=+4489.067465711" watchObservedRunningTime="2025-12-12 17:01:01.912474145 +0000 UTC m=+4489.081113746" Dec 12 17:01:04 crc kubenswrapper[4693]: I1212 17:01:04.357101 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:01:04 crc kubenswrapper[4693]: E1212 17:01:04.358179 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:01:04 crc kubenswrapper[4693]: I1212 17:01:04.923574 4693 generic.go:334] "Generic (PLEG): container finished" podID="73c05817-cc0b-41d9-8e36-a34a28489c55" containerID="e2e203b5c178ece8f5867fc9a91bcc81018c43732d2d26650948d8a0aef2d347" exitCode=0 Dec 12 17:01:04 crc kubenswrapper[4693]: I1212 17:01:04.923655 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29425981-9qdn4" event={"ID":"73c05817-cc0b-41d9-8e36-a34a28489c55","Type":"ContainerDied","Data":"e2e203b5c178ece8f5867fc9a91bcc81018c43732d2d26650948d8a0aef2d347"} Dec 12 17:01:06 crc kubenswrapper[4693]: I1212 17:01:06.402383 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29425981-9qdn4" Dec 12 17:01:06 crc kubenswrapper[4693]: I1212 17:01:06.556170 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbzmt\" (UniqueName: \"kubernetes.io/projected/73c05817-cc0b-41d9-8e36-a34a28489c55-kube-api-access-rbzmt\") pod \"73c05817-cc0b-41d9-8e36-a34a28489c55\" (UID: \"73c05817-cc0b-41d9-8e36-a34a28489c55\") " Dec 12 17:01:06 crc kubenswrapper[4693]: I1212 17:01:06.556494 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-fernet-keys\") pod \"73c05817-cc0b-41d9-8e36-a34a28489c55\" (UID: \"73c05817-cc0b-41d9-8e36-a34a28489c55\") " Dec 12 17:01:06 crc kubenswrapper[4693]: I1212 17:01:06.556576 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-combined-ca-bundle\") pod \"73c05817-cc0b-41d9-8e36-a34a28489c55\" (UID: \"73c05817-cc0b-41d9-8e36-a34a28489c55\") " Dec 12 17:01:06 crc kubenswrapper[4693]: I1212 17:01:06.556626 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-config-data\") pod \"73c05817-cc0b-41d9-8e36-a34a28489c55\" (UID: \"73c05817-cc0b-41d9-8e36-a34a28489c55\") " Dec 12 17:01:06 crc kubenswrapper[4693]: I1212 17:01:06.562056 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c05817-cc0b-41d9-8e36-a34a28489c55-kube-api-access-rbzmt" (OuterVolumeSpecName: "kube-api-access-rbzmt") pod "73c05817-cc0b-41d9-8e36-a34a28489c55" (UID: "73c05817-cc0b-41d9-8e36-a34a28489c55"). InnerVolumeSpecName "kube-api-access-rbzmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 17:01:06 crc kubenswrapper[4693]: I1212 17:01:06.563424 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "73c05817-cc0b-41d9-8e36-a34a28489c55" (UID: "73c05817-cc0b-41d9-8e36-a34a28489c55"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 17:01:06 crc kubenswrapper[4693]: I1212 17:01:06.594435 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73c05817-cc0b-41d9-8e36-a34a28489c55" (UID: "73c05817-cc0b-41d9-8e36-a34a28489c55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 17:01:06 crc kubenswrapper[4693]: I1212 17:01:06.639259 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-config-data" (OuterVolumeSpecName: "config-data") pod "73c05817-cc0b-41d9-8e36-a34a28489c55" (UID: "73c05817-cc0b-41d9-8e36-a34a28489c55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 17:01:06 crc kubenswrapper[4693]: I1212 17:01:06.662658 4693 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 12 17:01:06 crc kubenswrapper[4693]: I1212 17:01:06.662695 4693 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 17:01:06 crc kubenswrapper[4693]: I1212 17:01:06.662714 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c05817-cc0b-41d9-8e36-a34a28489c55-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 17:01:06 crc kubenswrapper[4693]: I1212 17:01:06.662731 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbzmt\" (UniqueName: \"kubernetes.io/projected/73c05817-cc0b-41d9-8e36-a34a28489c55-kube-api-access-rbzmt\") on node \"crc\" DevicePath \"\"" Dec 12 17:01:06 crc kubenswrapper[4693]: I1212 17:01:06.973166 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29425981-9qdn4" event={"ID":"73c05817-cc0b-41d9-8e36-a34a28489c55","Type":"ContainerDied","Data":"d0dc5143ccc03e4554d2d4b4b6ed0675bd6ed77cb95ee45b6b720f3fe159b035"} Dec 12 17:01:06 crc kubenswrapper[4693]: I1212 17:01:06.973394 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0dc5143ccc03e4554d2d4b4b6ed0675bd6ed77cb95ee45b6b720f3fe159b035" Dec 12 17:01:06 crc kubenswrapper[4693]: I1212 17:01:06.973205 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29425981-9qdn4" Dec 12 17:01:18 crc kubenswrapper[4693]: I1212 17:01:18.357247 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:01:18 crc kubenswrapper[4693]: E1212 17:01:18.357900 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:01:30 crc kubenswrapper[4693]: I1212 17:01:30.358546 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:01:30 crc kubenswrapper[4693]: E1212 17:01:30.359408 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:01:42 crc kubenswrapper[4693]: I1212 17:01:42.357462 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:01:42 crc kubenswrapper[4693]: E1212 17:01:42.358812 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:01:56 crc kubenswrapper[4693]: I1212 17:01:56.357070 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:01:56 crc kubenswrapper[4693]: E1212 17:01:56.357675 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:02:01 crc kubenswrapper[4693]: I1212 17:02:01.736352 4693 scope.go:117] "RemoveContainer" containerID="f9851131d8ef66361a567ef6e6a03f816d450ec18bb8909498c30b56c52f26ba" Dec 12 17:02:01 crc kubenswrapper[4693]: I1212 17:02:01.917678 4693 scope.go:117] "RemoveContainer" containerID="1f39516da1ea14ef369f5d76d048ae8d302c90956a3b4c92464f4a6f69fbb309" Dec 12 17:02:01 crc kubenswrapper[4693]: I1212 17:02:01.974175 4693 scope.go:117] "RemoveContainer" containerID="b6b858f87f0e3daf5cfeec7bde60d3956b70fb8f56ed91350780574e7f9c1bb5" Dec 12 17:02:07 crc kubenswrapper[4693]: I1212 17:02:07.357079 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:02:07 crc kubenswrapper[4693]: E1212 17:02:07.359315 4693 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:02:10 crc kubenswrapper[4693]: I1212 17:02:10.297302 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j47h8"] Dec 12 17:02:10 crc kubenswrapper[4693]: E1212 17:02:10.298401 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c05817-cc0b-41d9-8e36-a34a28489c55" containerName="keystone-cron" Dec 12 17:02:10 crc kubenswrapper[4693]: I1212 17:02:10.298417 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c05817-cc0b-41d9-8e36-a34a28489c55" containerName="keystone-cron" Dec 12 17:02:10 crc kubenswrapper[4693]: I1212 17:02:10.298705 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="73c05817-cc0b-41d9-8e36-a34a28489c55" containerName="keystone-cron" Dec 12 17:02:10 crc kubenswrapper[4693]: I1212 17:02:10.306354 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 17:02:10 crc kubenswrapper[4693]: I1212 17:02:10.319832 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j47h8"] Dec 12 17:02:10 crc kubenswrapper[4693]: I1212 17:02:10.353792 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc6789a-0731-46ed-8bd2-8ef2770610e8-utilities\") pod \"redhat-operators-j47h8\" (UID: \"abc6789a-0731-46ed-8bd2-8ef2770610e8\") " pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 17:02:10 crc kubenswrapper[4693]: I1212 17:02:10.353927 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc6789a-0731-46ed-8bd2-8ef2770610e8-catalog-content\") pod \"redhat-operators-j47h8\" (UID: \"abc6789a-0731-46ed-8bd2-8ef2770610e8\") " pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 17:02:10 crc kubenswrapper[4693]: I1212 17:02:10.354304 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcmr5\" (UniqueName: \"kubernetes.io/projected/abc6789a-0731-46ed-8bd2-8ef2770610e8-kube-api-access-jcmr5\") pod \"redhat-operators-j47h8\" (UID: \"abc6789a-0731-46ed-8bd2-8ef2770610e8\") " pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 17:02:10 crc kubenswrapper[4693]: I1212 17:02:10.456424 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc6789a-0731-46ed-8bd2-8ef2770610e8-utilities\") pod \"redhat-operators-j47h8\" (UID: \"abc6789a-0731-46ed-8bd2-8ef2770610e8\") " pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 17:02:10 crc kubenswrapper[4693]: I1212 17:02:10.456511 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc6789a-0731-46ed-8bd2-8ef2770610e8-catalog-content\") pod \"redhat-operators-j47h8\" (UID: \"abc6789a-0731-46ed-8bd2-8ef2770610e8\") " pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 
17:02:10 crc kubenswrapper[4693]: I1212 17:02:10.456624 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcmr5\" (UniqueName: \"kubernetes.io/projected/abc6789a-0731-46ed-8bd2-8ef2770610e8-kube-api-access-jcmr5\") pod \"redhat-operators-j47h8\" (UID: \"abc6789a-0731-46ed-8bd2-8ef2770610e8\") " pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 17:02:10 crc kubenswrapper[4693]: I1212 17:02:10.458690 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc6789a-0731-46ed-8bd2-8ef2770610e8-utilities\") pod \"redhat-operators-j47h8\" (UID: \"abc6789a-0731-46ed-8bd2-8ef2770610e8\") " pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 17:02:10 crc kubenswrapper[4693]: I1212 17:02:10.459366 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc6789a-0731-46ed-8bd2-8ef2770610e8-catalog-content\") pod \"redhat-operators-j47h8\" (UID: \"abc6789a-0731-46ed-8bd2-8ef2770610e8\") " pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 17:02:10 crc kubenswrapper[4693]: I1212 17:02:10.486048 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcmr5\" (UniqueName: \"kubernetes.io/projected/abc6789a-0731-46ed-8bd2-8ef2770610e8-kube-api-access-jcmr5\") pod \"redhat-operators-j47h8\" (UID: \"abc6789a-0731-46ed-8bd2-8ef2770610e8\") " pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 17:02:10 crc kubenswrapper[4693]: I1212 17:02:10.649899 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 17:02:11 crc kubenswrapper[4693]: I1212 17:02:11.185503 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j47h8"] Dec 12 17:02:11 crc kubenswrapper[4693]: W1212 17:02:11.193636 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc6789a_0731_46ed_8bd2_8ef2770610e8.slice/crio-0bb6c39393fde9cb2f5cf018664ee937aa44d92e60ab4754c52c8cc6ae3bd4d5 WatchSource:0}: Error finding container 0bb6c39393fde9cb2f5cf018664ee937aa44d92e60ab4754c52c8cc6ae3bd4d5: Status 404 returned error can't find the container with id 0bb6c39393fde9cb2f5cf018664ee937aa44d92e60ab4754c52c8cc6ae3bd4d5 Dec 12 17:02:11 crc kubenswrapper[4693]: I1212 17:02:11.766212 4693 generic.go:334] "Generic (PLEG): container finished" podID="abc6789a-0731-46ed-8bd2-8ef2770610e8" containerID="052e77bef7479a6540e3aecf3c7895635cf30973d5480204df34f17da9c366bf" exitCode=0 Dec 12 17:02:11 crc kubenswrapper[4693]: I1212 17:02:11.766313 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j47h8" event={"ID":"abc6789a-0731-46ed-8bd2-8ef2770610e8","Type":"ContainerDied","Data":"052e77bef7479a6540e3aecf3c7895635cf30973d5480204df34f17da9c366bf"} Dec 12 17:02:11 crc kubenswrapper[4693]: I1212 17:02:11.766588 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j47h8" event={"ID":"abc6789a-0731-46ed-8bd2-8ef2770610e8","Type":"ContainerStarted","Data":"0bb6c39393fde9cb2f5cf018664ee937aa44d92e60ab4754c52c8cc6ae3bd4d5"} Dec 12 17:02:11 crc kubenswrapper[4693]: I1212 17:02:11.768089 4693 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 17:02:13 crc kubenswrapper[4693]: I1212 
17:02:13.799995 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j47h8" event={"ID":"abc6789a-0731-46ed-8bd2-8ef2770610e8","Type":"ContainerStarted","Data":"4dc92ba6c060f803c1eaf822280ef73f871d1f0b4ee194cf7d9304f77a96159c"} Dec 12 17:02:16 crc kubenswrapper[4693]: I1212 17:02:16.837666 4693 generic.go:334] "Generic (PLEG): container finished" podID="abc6789a-0731-46ed-8bd2-8ef2770610e8" containerID="4dc92ba6c060f803c1eaf822280ef73f871d1f0b4ee194cf7d9304f77a96159c" exitCode=0 Dec 12 17:02:16 crc kubenswrapper[4693]: I1212 17:02:16.838515 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j47h8" event={"ID":"abc6789a-0731-46ed-8bd2-8ef2770610e8","Type":"ContainerDied","Data":"4dc92ba6c060f803c1eaf822280ef73f871d1f0b4ee194cf7d9304f77a96159c"} Dec 12 17:02:18 crc kubenswrapper[4693]: I1212 17:02:18.872109 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j47h8" event={"ID":"abc6789a-0731-46ed-8bd2-8ef2770610e8","Type":"ContainerStarted","Data":"ac5b00dde2715dfaac8b1ffeebc3ee009f83123006fcfdea39ed507393906fcb"} Dec 12 17:02:18 crc kubenswrapper[4693]: I1212 17:02:18.904865 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j47h8" podStartSLOduration=3.092313416 podStartE2EDuration="8.904842388s" podCreationTimestamp="2025-12-12 17:02:10 +0000 UTC" firstStartedPulling="2025-12-12 17:02:11.76783318 +0000 UTC m=+4558.936472781" lastFinishedPulling="2025-12-12 17:02:17.580362152 +0000 UTC m=+4564.749001753" observedRunningTime="2025-12-12 17:02:18.894966564 +0000 UTC m=+4566.063606165" watchObservedRunningTime="2025-12-12 17:02:18.904842388 +0000 UTC m=+4566.073481999" Dec 12 17:02:20 crc kubenswrapper[4693]: I1212 17:02:20.650557 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 17:02:20 crc kubenswrapper[4693]: I1212 17:02:20.651978 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 17:02:21 crc kubenswrapper[4693]: I1212 17:02:21.703856 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j47h8" podUID="abc6789a-0731-46ed-8bd2-8ef2770610e8" containerName="registry-server" probeResult="failure" output=< Dec 12 17:02:21 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:02:21 crc kubenswrapper[4693]: > Dec 12 17:02:22 crc kubenswrapper[4693]: I1212 17:02:22.358237 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:02:22 crc kubenswrapper[4693]: E1212 17:02:22.359007 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:02:30 crc kubenswrapper[4693]: I1212 17:02:30.704785 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 17:02:30 crc kubenswrapper[4693]: I1212 17:02:30.758388 4693 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 17:02:30 crc kubenswrapper[4693]: I1212 17:02:30.952804 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j47h8"] Dec 12 17:02:32 crc kubenswrapper[4693]: I1212 17:02:32.038868 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j47h8" podUID="abc6789a-0731-46ed-8bd2-8ef2770610e8" containerName="registry-server" containerID="cri-o://ac5b00dde2715dfaac8b1ffeebc3ee009f83123006fcfdea39ed507393906fcb" gracePeriod=2 Dec 12 17:02:32 crc kubenswrapper[4693]: I1212 17:02:32.575105 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 17:02:32 crc kubenswrapper[4693]: I1212 17:02:32.637689 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcmr5\" (UniqueName: \"kubernetes.io/projected/abc6789a-0731-46ed-8bd2-8ef2770610e8-kube-api-access-jcmr5\") pod \"abc6789a-0731-46ed-8bd2-8ef2770610e8\" (UID: \"abc6789a-0731-46ed-8bd2-8ef2770610e8\") " Dec 12 17:02:32 crc kubenswrapper[4693]: I1212 17:02:32.637775 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc6789a-0731-46ed-8bd2-8ef2770610e8-catalog-content\") pod \"abc6789a-0731-46ed-8bd2-8ef2770610e8\" (UID: \"abc6789a-0731-46ed-8bd2-8ef2770610e8\") " Dec 12 17:02:32 crc kubenswrapper[4693]: I1212 17:02:32.637867 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc6789a-0731-46ed-8bd2-8ef2770610e8-utilities\") pod \"abc6789a-0731-46ed-8bd2-8ef2770610e8\" (UID: \"abc6789a-0731-46ed-8bd2-8ef2770610e8\") " Dec 12 17:02:32 crc kubenswrapper[4693]: I1212 17:02:32.639023 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abc6789a-0731-46ed-8bd2-8ef2770610e8-utilities" (OuterVolumeSpecName: "utilities") pod "abc6789a-0731-46ed-8bd2-8ef2770610e8" (UID: "abc6789a-0731-46ed-8bd2-8ef2770610e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 17:02:32 crc kubenswrapper[4693]: I1212 17:02:32.644439 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc6789a-0731-46ed-8bd2-8ef2770610e8-kube-api-access-jcmr5" (OuterVolumeSpecName: "kube-api-access-jcmr5") pod "abc6789a-0731-46ed-8bd2-8ef2770610e8" (UID: "abc6789a-0731-46ed-8bd2-8ef2770610e8"). InnerVolumeSpecName "kube-api-access-jcmr5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 17:02:32 crc kubenswrapper[4693]: I1212 17:02:32.740265 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcmr5\" (UniqueName: \"kubernetes.io/projected/abc6789a-0731-46ed-8bd2-8ef2770610e8-kube-api-access-jcmr5\") on node \"crc\" DevicePath \"\"" Dec 12 17:02:32 crc kubenswrapper[4693]: I1212 17:02:32.740316 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc6789a-0731-46ed-8bd2-8ef2770610e8-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 17:02:32 crc kubenswrapper[4693]: I1212 17:02:32.748667 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abc6789a-0731-46ed-8bd2-8ef2770610e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abc6789a-0731-46ed-8bd2-8ef2770610e8" (UID: "abc6789a-0731-46ed-8bd2-8ef2770610e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 17:02:32 crc kubenswrapper[4693]: I1212 17:02:32.842685 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc6789a-0731-46ed-8bd2-8ef2770610e8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 17:02:33 crc kubenswrapper[4693]: I1212 17:02:33.069215 4693 generic.go:334] "Generic (PLEG): container finished" podID="abc6789a-0731-46ed-8bd2-8ef2770610e8" containerID="ac5b00dde2715dfaac8b1ffeebc3ee009f83123006fcfdea39ed507393906fcb" exitCode=0 Dec 12 17:02:33 crc kubenswrapper[4693]: I1212 17:02:33.069380 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j47h8" Dec 12 17:02:33 crc kubenswrapper[4693]: I1212 17:02:33.069767 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j47h8" event={"ID":"abc6789a-0731-46ed-8bd2-8ef2770610e8","Type":"ContainerDied","Data":"ac5b00dde2715dfaac8b1ffeebc3ee009f83123006fcfdea39ed507393906fcb"} Dec 12 17:02:33 crc kubenswrapper[4693]: I1212 17:02:33.069805 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j47h8" event={"ID":"abc6789a-0731-46ed-8bd2-8ef2770610e8","Type":"ContainerDied","Data":"0bb6c39393fde9cb2f5cf018664ee937aa44d92e60ab4754c52c8cc6ae3bd4d5"} Dec 12 17:02:33 crc kubenswrapper[4693]: I1212 17:02:33.069851 4693 scope.go:117] "RemoveContainer" containerID="ac5b00dde2715dfaac8b1ffeebc3ee009f83123006fcfdea39ed507393906fcb" Dec 12 17:02:33 crc kubenswrapper[4693]: I1212 17:02:33.103521 4693 scope.go:117] "RemoveContainer" containerID="4dc92ba6c060f803c1eaf822280ef73f871d1f0b4ee194cf7d9304f77a96159c" Dec 12 17:02:33 crc kubenswrapper[4693]: I1212 17:02:33.110390 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j47h8"] Dec 12 17:02:33 crc kubenswrapper[4693]: I1212 17:02:33.120786 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j47h8"] Dec 12 17:02:33 crc kubenswrapper[4693]: I1212 17:02:33.131594 4693 scope.go:117] "RemoveContainer" containerID="052e77bef7479a6540e3aecf3c7895635cf30973d5480204df34f17da9c366bf" Dec 12 17:02:33 crc kubenswrapper[4693]: I1212 17:02:33.220463 4693 scope.go:117] "RemoveContainer" containerID="ac5b00dde2715dfaac8b1ffeebc3ee009f83123006fcfdea39ed507393906fcb" Dec 12 17:02:33 crc kubenswrapper[4693]: E1212 17:02:33.220975 4693 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac5b00dde2715dfaac8b1ffeebc3ee009f83123006fcfdea39ed507393906fcb\": container with ID starting with ac5b00dde2715dfaac8b1ffeebc3ee009f83123006fcfdea39ed507393906fcb not found: ID does not exist" containerID="ac5b00dde2715dfaac8b1ffeebc3ee009f83123006fcfdea39ed507393906fcb" Dec 12 17:02:33 crc kubenswrapper[4693]: I1212 17:02:33.221011 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac5b00dde2715dfaac8b1ffeebc3ee009f83123006fcfdea39ed507393906fcb"} err="failed to get container status \"ac5b00dde2715dfaac8b1ffeebc3ee009f83123006fcfdea39ed507393906fcb\": rpc error: code = NotFound desc = could not find container \"ac5b00dde2715dfaac8b1ffeebc3ee009f83123006fcfdea39ed507393906fcb\": container with ID starting with ac5b00dde2715dfaac8b1ffeebc3ee009f83123006fcfdea39ed507393906fcb not found: ID does not exist" Dec 12 17:02:33 crc kubenswrapper[4693]: I1212 17:02:33.221038 4693 scope.go:117] "RemoveContainer" containerID="4dc92ba6c060f803c1eaf822280ef73f871d1f0b4ee194cf7d9304f77a96159c" Dec 12 17:02:33 crc kubenswrapper[4693]: E1212 17:02:33.221262 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dc92ba6c060f803c1eaf822280ef73f871d1f0b4ee194cf7d9304f77a96159c\": container with ID starting with 4dc92ba6c060f803c1eaf822280ef73f871d1f0b4ee194cf7d9304f77a96159c not found: ID does not exist" containerID="4dc92ba6c060f803c1eaf822280ef73f871d1f0b4ee194cf7d9304f77a96159c" Dec 12 17:02:33 crc kubenswrapper[4693]: I1212 17:02:33.221303 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dc92ba6c060f803c1eaf822280ef73f871d1f0b4ee194cf7d9304f77a96159c"} err="failed to get container status \"4dc92ba6c060f803c1eaf822280ef73f871d1f0b4ee194cf7d9304f77a96159c\": rpc error: code = NotFound desc = could not find container \"4dc92ba6c060f803c1eaf822280ef73f871d1f0b4ee194cf7d9304f77a96159c\": container with ID starting with 4dc92ba6c060f803c1eaf822280ef73f871d1f0b4ee194cf7d9304f77a96159c not found: ID does not exist" Dec 12 17:02:33 crc kubenswrapper[4693]: I1212 17:02:33.221325 4693 scope.go:117] "RemoveContainer" containerID="052e77bef7479a6540e3aecf3c7895635cf30973d5480204df34f17da9c366bf" Dec 12 17:02:33 crc kubenswrapper[4693]: E1212 17:02:33.221554 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"052e77bef7479a6540e3aecf3c7895635cf30973d5480204df34f17da9c366bf\": container with ID starting with 052e77bef7479a6540e3aecf3c7895635cf30973d5480204df34f17da9c366bf not found: ID does not exist" containerID="052e77bef7479a6540e3aecf3c7895635cf30973d5480204df34f17da9c366bf" Dec 12 17:02:33 crc kubenswrapper[4693]: I1212 17:02:33.221579 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"052e77bef7479a6540e3aecf3c7895635cf30973d5480204df34f17da9c366bf"} err="failed to get container status \"052e77bef7479a6540e3aecf3c7895635cf30973d5480204df34f17da9c366bf\": rpc error: code = NotFound desc = could not find container \"052e77bef7479a6540e3aecf3c7895635cf30973d5480204df34f17da9c366bf\": container with ID starting with 052e77bef7479a6540e3aecf3c7895635cf30973d5480204df34f17da9c366bf not found: ID does not exist" Dec 12 17:02:33 crc kubenswrapper[4693]: I1212 17:02:33.375086 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="abc6789a-0731-46ed-8bd2-8ef2770610e8" path="/var/lib/kubelet/pods/abc6789a-0731-46ed-8bd2-8ef2770610e8/volumes" Dec 12 17:02:34 crc kubenswrapper[4693]: I1212 17:02:34.357778 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:02:34 crc kubenswrapper[4693]: E1212 17:02:34.358413 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:02:47 crc kubenswrapper[4693]: I1212 17:02:47.357333 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:02:47 crc kubenswrapper[4693]: E1212 17:02:47.358216 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:03:00 crc kubenswrapper[4693]: I1212 17:03:00.357254 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:03:00 crc kubenswrapper[4693]: E1212 17:03:00.358077 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:03:11 crc kubenswrapper[4693]: I1212 17:03:11.357791 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:03:11 crc kubenswrapper[4693]: E1212 17:03:11.359048 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:03:12 crc kubenswrapper[4693]: I1212 17:03:12.802617 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q5kv8"] Dec 12 17:03:12 crc kubenswrapper[4693]: E1212 17:03:12.803525 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc6789a-0731-46ed-8bd2-8ef2770610e8" containerName="extract-content" Dec 12 17:03:12 crc kubenswrapper[4693]: I1212 17:03:12.803550 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc6789a-0731-46ed-8bd2-8ef2770610e8" containerName="extract-content" Dec 12 17:03:12 crc kubenswrapper[4693]: E1212 17:03:12.803616 4693 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="abc6789a-0731-46ed-8bd2-8ef2770610e8" containerName="registry-server" Dec 12 17:03:12 crc kubenswrapper[4693]: I1212 17:03:12.803626 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc6789a-0731-46ed-8bd2-8ef2770610e8" containerName="registry-server" Dec 12 17:03:12 crc kubenswrapper[4693]: E1212 17:03:12.803648 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc6789a-0731-46ed-8bd2-8ef2770610e8" containerName="extract-utilities" Dec 12 17:03:12 crc kubenswrapper[4693]: I1212 17:03:12.803656 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc6789a-0731-46ed-8bd2-8ef2770610e8" containerName="extract-utilities" Dec 12 17:03:12 crc kubenswrapper[4693]: I1212 17:03:12.803934 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="abc6789a-0731-46ed-8bd2-8ef2770610e8" containerName="registry-server" Dec 12 17:03:12 crc kubenswrapper[4693]: I1212 17:03:12.805946 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5kv8" Dec 12 17:03:12 crc kubenswrapper[4693]: I1212 17:03:12.819259 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5kv8"] Dec 12 17:03:12 crc kubenswrapper[4693]: I1212 17:03:12.954898 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-catalog-content\") pod \"redhat-marketplace-q5kv8\" (UID: \"70e6b2e3-fa66-432b-b7f8-4d7c4228105a\") " pod="openshift-marketplace/redhat-marketplace-q5kv8" Dec 12 17:03:12 crc kubenswrapper[4693]: I1212 17:03:12.955092 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-utilities\") pod \"redhat-marketplace-q5kv8\" (UID: \"70e6b2e3-fa66-432b-b7f8-4d7c4228105a\") " pod="openshift-marketplace/redhat-marketplace-q5kv8" Dec 12 17:03:12 crc kubenswrapper[4693]: I1212 17:03:12.955154 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsj29\" (UniqueName: \"kubernetes.io/projected/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-kube-api-access-dsj29\") pod \"redhat-marketplace-q5kv8\" (UID: \"70e6b2e3-fa66-432b-b7f8-4d7c4228105a\") " pod="openshift-marketplace/redhat-marketplace-q5kv8" Dec 12 17:03:13 crc kubenswrapper[4693]: I1212 17:03:13.057342 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-utilities\") pod \"redhat-marketplace-q5kv8\" (UID: \"70e6b2e3-fa66-432b-b7f8-4d7c4228105a\") " pod="openshift-marketplace/redhat-marketplace-q5kv8" Dec 12 17:03:13 crc kubenswrapper[4693]: I1212 17:03:13.057723 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsj29\" (UniqueName: \"kubernetes.io/projected/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-kube-api-access-dsj29\") pod \"redhat-marketplace-q5kv8\" (UID: \"70e6b2e3-fa66-432b-b7f8-4d7c4228105a\") " pod="openshift-marketplace/redhat-marketplace-q5kv8" Dec 12 17:03:13 crc kubenswrapper[4693]: I1212 17:03:13.057942 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-catalog-content\") pod 
\"redhat-marketplace-q5kv8\" (UID: \"70e6b2e3-fa66-432b-b7f8-4d7c4228105a\") " pod="openshift-marketplace/redhat-marketplace-q5kv8" Dec 12 17:03:13 crc kubenswrapper[4693]: I1212 17:03:13.058615 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-catalog-content\") pod \"redhat-marketplace-q5kv8\" (UID: \"70e6b2e3-fa66-432b-b7f8-4d7c4228105a\") " pod="openshift-marketplace/redhat-marketplace-q5kv8" Dec 12 17:03:13 crc kubenswrapper[4693]: I1212 17:03:13.059007 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-utilities\") pod \"redhat-marketplace-q5kv8\" (UID: \"70e6b2e3-fa66-432b-b7f8-4d7c4228105a\") " pod="openshift-marketplace/redhat-marketplace-q5kv8" Dec 12 17:03:13 crc kubenswrapper[4693]: I1212 17:03:13.092126 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsj29\" (UniqueName: \"kubernetes.io/projected/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-kube-api-access-dsj29\") pod \"redhat-marketplace-q5kv8\" (UID: \"70e6b2e3-fa66-432b-b7f8-4d7c4228105a\") " pod="openshift-marketplace/redhat-marketplace-q5kv8" Dec 12 17:03:13 crc kubenswrapper[4693]: I1212 17:03:13.128235 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5kv8" Dec 12 17:03:13 crc kubenswrapper[4693]: I1212 17:03:13.656764 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5kv8"] Dec 12 17:03:14 crc kubenswrapper[4693]: I1212 17:03:14.599670 4693 generic.go:334] "Generic (PLEG): container finished" podID="70e6b2e3-fa66-432b-b7f8-4d7c4228105a" containerID="ed33c4f19d44486cbba11aa87b8c1490bea6e8e6566e898100b39ed04a00f763" exitCode=0 Dec 12 17:03:14 crc kubenswrapper[4693]: I1212 17:03:14.600662 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5kv8" event={"ID":"70e6b2e3-fa66-432b-b7f8-4d7c4228105a","Type":"ContainerDied","Data":"ed33c4f19d44486cbba11aa87b8c1490bea6e8e6566e898100b39ed04a00f763"} Dec 12 17:03:14 crc kubenswrapper[4693]: I1212 17:03:14.601016 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5kv8" event={"ID":"70e6b2e3-fa66-432b-b7f8-4d7c4228105a","Type":"ContainerStarted","Data":"5aaef08bf9aa24bd1dbc8b50420f259e1f1bfae4a1d8050b67a3e5e4f9cd9ff1"} Dec 12 17:03:15 crc kubenswrapper[4693]: I1212 17:03:15.614449 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5kv8" event={"ID":"70e6b2e3-fa66-432b-b7f8-4d7c4228105a","Type":"ContainerStarted","Data":"646b44d89ffff61e4a922c4d472f664e0b1735d25cc9c03dbc30b8a7bf4d557b"} Dec 12 17:03:16 crc kubenswrapper[4693]: I1212 17:03:16.633352 4693 generic.go:334] "Generic (PLEG): container finished" podID="70e6b2e3-fa66-432b-b7f8-4d7c4228105a" containerID="646b44d89ffff61e4a922c4d472f664e0b1735d25cc9c03dbc30b8a7bf4d557b" exitCode=0 Dec 12 17:03:16 crc kubenswrapper[4693]: I1212 17:03:16.633443 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5kv8" event={"ID":"70e6b2e3-fa66-432b-b7f8-4d7c4228105a","Type":"ContainerDied","Data":"646b44d89ffff61e4a922c4d472f664e0b1735d25cc9c03dbc30b8a7bf4d557b"} Dec 12 17:03:17 crc kubenswrapper[4693]: I1212 17:03:17.648726 4693 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5kv8" event={"ID":"70e6b2e3-fa66-432b-b7f8-4d7c4228105a","Type":"ContainerStarted","Data":"c29ad57e388bafd65c0499f99a899e2ee04b5329e3d59f0382d740906e1bbea7"} Dec 12 17:03:17 crc kubenswrapper[4693]: I1212 17:03:17.672358 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q5kv8" podStartSLOduration=3.1440746920000002 podStartE2EDuration="5.672341931s" podCreationTimestamp="2025-12-12 17:03:12 +0000 UTC" firstStartedPulling="2025-12-12 17:03:14.602146202 +0000 UTC m=+4621.770785803" lastFinishedPulling="2025-12-12 17:03:17.13041343 +0000 UTC m=+4624.299053042" observedRunningTime="2025-12-12 17:03:17.669224378 +0000 UTC m=+4624.837863989" watchObservedRunningTime="2025-12-12 17:03:17.672341931 +0000 UTC m=+4624.840981532" Dec 12 17:03:23 crc kubenswrapper[4693]: I1212 17:03:23.129482 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q5kv8" Dec 12 17:03:23 crc kubenswrapper[4693]: I1212 17:03:23.130137 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q5kv8" Dec 12 17:03:23 crc kubenswrapper[4693]: I1212 17:03:23.184149 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q5kv8" Dec 12 17:03:23 crc kubenswrapper[4693]: I1212 17:03:23.780143 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q5kv8" Dec 12 17:03:23 crc kubenswrapper[4693]: I1212 17:03:23.912367 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5kv8"] Dec 12 17:03:24 crc kubenswrapper[4693]: I1212 17:03:24.356950 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:03:24 crc kubenswrapper[4693]: E1212 17:03:24.357316 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:03:25 crc kubenswrapper[4693]: I1212 17:03:25.752521 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q5kv8" podUID="70e6b2e3-fa66-432b-b7f8-4d7c4228105a" containerName="registry-server" containerID="cri-o://c29ad57e388bafd65c0499f99a899e2ee04b5329e3d59f0382d740906e1bbea7" gracePeriod=2 Dec 12 17:03:26 crc kubenswrapper[4693]: I1212 17:03:26.384237 4693 util.go:48] "No ready sandbox for pod can be found. 
Dec 12 17:03:26 crc kubenswrapper[4693]: I1212 17:03:26.550249 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-catalog-content\") pod \"70e6b2e3-fa66-432b-b7f8-4d7c4228105a\" (UID: \"70e6b2e3-fa66-432b-b7f8-4d7c4228105a\") "
Dec 12 17:03:26 crc kubenswrapper[4693]: I1212 17:03:26.550463 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-utilities\") pod \"70e6b2e3-fa66-432b-b7f8-4d7c4228105a\" (UID: \"70e6b2e3-fa66-432b-b7f8-4d7c4228105a\") "
Dec 12 17:03:26 crc kubenswrapper[4693]: I1212 17:03:26.550662 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsj29\" (UniqueName: \"kubernetes.io/projected/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-kube-api-access-dsj29\") pod \"70e6b2e3-fa66-432b-b7f8-4d7c4228105a\" (UID: \"70e6b2e3-fa66-432b-b7f8-4d7c4228105a\") "
Dec 12 17:03:26 crc kubenswrapper[4693]: I1212 17:03:26.551402 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-utilities" (OuterVolumeSpecName: "utilities") pod "70e6b2e3-fa66-432b-b7f8-4d7c4228105a" (UID: "70e6b2e3-fa66-432b-b7f8-4d7c4228105a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 17:03:26 crc kubenswrapper[4693]: I1212 17:03:26.551709 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 17:03:26 crc kubenswrapper[4693]: I1212 17:03:26.575161 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70e6b2e3-fa66-432b-b7f8-4d7c4228105a" (UID: "70e6b2e3-fa66-432b-b7f8-4d7c4228105a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 17:03:26 crc kubenswrapper[4693]: I1212 17:03:26.654021 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 17:03:26 crc kubenswrapper[4693]: I1212 17:03:26.764362 4693 generic.go:334] "Generic (PLEG): container finished" podID="70e6b2e3-fa66-432b-b7f8-4d7c4228105a" containerID="c29ad57e388bafd65c0499f99a899e2ee04b5329e3d59f0382d740906e1bbea7" exitCode=0
Dec 12 17:03:26 crc kubenswrapper[4693]: I1212 17:03:26.764415 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5kv8" event={"ID":"70e6b2e3-fa66-432b-b7f8-4d7c4228105a","Type":"ContainerDied","Data":"c29ad57e388bafd65c0499f99a899e2ee04b5329e3d59f0382d740906e1bbea7"}
Dec 12 17:03:26 crc kubenswrapper[4693]: I1212 17:03:26.764439 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5kv8"
Dec 12 17:03:26 crc kubenswrapper[4693]: I1212 17:03:26.764458 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5kv8" event={"ID":"70e6b2e3-fa66-432b-b7f8-4d7c4228105a","Type":"ContainerDied","Data":"5aaef08bf9aa24bd1dbc8b50420f259e1f1bfae4a1d8050b67a3e5e4f9cd9ff1"}
Dec 12 17:03:26 crc kubenswrapper[4693]: I1212 17:03:26.764480 4693 scope.go:117] "RemoveContainer" containerID="c29ad57e388bafd65c0499f99a899e2ee04b5329e3d59f0382d740906e1bbea7"
Dec 12 17:03:26 crc kubenswrapper[4693]: I1212 17:03:26.793347 4693 scope.go:117] "RemoveContainer" containerID="646b44d89ffff61e4a922c4d472f664e0b1735d25cc9c03dbc30b8a7bf4d557b"
Dec 12 17:03:27 crc kubenswrapper[4693]: I1212 17:03:27.191572 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-kube-api-access-dsj29" (OuterVolumeSpecName: "kube-api-access-dsj29") pod "70e6b2e3-fa66-432b-b7f8-4d7c4228105a" (UID: "70e6b2e3-fa66-432b-b7f8-4d7c4228105a"). InnerVolumeSpecName "kube-api-access-dsj29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 17:03:27 crc kubenswrapper[4693]: I1212 17:03:27.204920 4693 scope.go:117] "RemoveContainer" containerID="ed33c4f19d44486cbba11aa87b8c1490bea6e8e6566e898100b39ed04a00f763"
Dec 12 17:03:27 crc kubenswrapper[4693]: I1212 17:03:27.273765 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsj29\" (UniqueName: \"kubernetes.io/projected/70e6b2e3-fa66-432b-b7f8-4d7c4228105a-kube-api-access-dsj29\") on node \"crc\" DevicePath \"\""
Dec 12 17:03:27 crc kubenswrapper[4693]: I1212 17:03:27.322385 4693 scope.go:117] "RemoveContainer" containerID="c29ad57e388bafd65c0499f99a899e2ee04b5329e3d59f0382d740906e1bbea7"
Dec 12 17:03:27 crc kubenswrapper[4693]: E1212 17:03:27.322968 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c29ad57e388bafd65c0499f99a899e2ee04b5329e3d59f0382d740906e1bbea7\": container with ID starting with c29ad57e388bafd65c0499f99a899e2ee04b5329e3d59f0382d740906e1bbea7 not found: ID does not exist" containerID="c29ad57e388bafd65c0499f99a899e2ee04b5329e3d59f0382d740906e1bbea7"
Dec 12 17:03:27 crc kubenswrapper[4693]: I1212 17:03:27.323241 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c29ad57e388bafd65c0499f99a899e2ee04b5329e3d59f0382d740906e1bbea7"} err="failed to get container status \"c29ad57e388bafd65c0499f99a899e2ee04b5329e3d59f0382d740906e1bbea7\": rpc error: code = NotFound desc = could not find container \"c29ad57e388bafd65c0499f99a899e2ee04b5329e3d59f0382d740906e1bbea7\": container with ID starting with c29ad57e388bafd65c0499f99a899e2ee04b5329e3d59f0382d740906e1bbea7 not found: ID does not exist"
Dec 12 17:03:27 crc kubenswrapper[4693]: I1212 17:03:27.323351 4693 scope.go:117] "RemoveContainer" containerID="646b44d89ffff61e4a922c4d472f664e0b1735d25cc9c03dbc30b8a7bf4d557b"
Dec 12 17:03:27 crc kubenswrapper[4693]: E1212 17:03:27.323855 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"646b44d89ffff61e4a922c4d472f664e0b1735d25cc9c03dbc30b8a7bf4d557b\": container with ID starting with 646b44d89ffff61e4a922c4d472f664e0b1735d25cc9c03dbc30b8a7bf4d557b not found: ID does not exist" containerID="646b44d89ffff61e4a922c4d472f664e0b1735d25cc9c03dbc30b8a7bf4d557b"
Dec 12 17:03:27 crc kubenswrapper[4693]: I1212 17:03:27.323928 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646b44d89ffff61e4a922c4d472f664e0b1735d25cc9c03dbc30b8a7bf4d557b"} err="failed to get container status \"646b44d89ffff61e4a922c4d472f664e0b1735d25cc9c03dbc30b8a7bf4d557b\": rpc error: code = NotFound desc = could not find container \"646b44d89ffff61e4a922c4d472f664e0b1735d25cc9c03dbc30b8a7bf4d557b\": container with ID starting with 646b44d89ffff61e4a922c4d472f664e0b1735d25cc9c03dbc30b8a7bf4d557b not found: ID does not exist"
Dec 12 17:03:27 crc kubenswrapper[4693]: I1212 17:03:27.323957 4693 scope.go:117] "RemoveContainer" containerID="ed33c4f19d44486cbba11aa87b8c1490bea6e8e6566e898100b39ed04a00f763"
Dec 12 17:03:27 crc kubenswrapper[4693]: E1212 17:03:27.324342 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed33c4f19d44486cbba11aa87b8c1490bea6e8e6566e898100b39ed04a00f763\": container with ID starting with ed33c4f19d44486cbba11aa87b8c1490bea6e8e6566e898100b39ed04a00f763 not found: ID does not exist" containerID="ed33c4f19d44486cbba11aa87b8c1490bea6e8e6566e898100b39ed04a00f763"
Dec 12 17:03:27 crc kubenswrapper[4693]: I1212 17:03:27.324444 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed33c4f19d44486cbba11aa87b8c1490bea6e8e6566e898100b39ed04a00f763"} err="failed to get container status \"ed33c4f19d44486cbba11aa87b8c1490bea6e8e6566e898100b39ed04a00f763\": rpc error: code = NotFound desc = could not find container \"ed33c4f19d44486cbba11aa87b8c1490bea6e8e6566e898100b39ed04a00f763\": container with ID starting with ed33c4f19d44486cbba11aa87b8c1490bea6e8e6566e898100b39ed04a00f763 not found: ID does not exist"
Dec 12 17:03:27 crc kubenswrapper[4693]: I1212 17:03:27.411494 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5kv8"]
Dec 12 17:03:27 crc kubenswrapper[4693]: I1212 17:03:27.425400 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5kv8"]
Dec 12 17:03:29 crc kubenswrapper[4693]: I1212 17:03:29.372055 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70e6b2e3-fa66-432b-b7f8-4d7c4228105a" path="/var/lib/kubelet/pods/70e6b2e3-fa66-432b-b7f8-4d7c4228105a/volumes"
Dec 12 17:03:39 crc kubenswrapper[4693]: I1212 17:03:39.357432 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90"
Dec 12 17:03:39 crc kubenswrapper[4693]: E1212 17:03:39.358694 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d"
Dec 12 17:03:43 crc kubenswrapper[4693]: I1212 17:03:43.924476 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vrtp4"]
Dec 12 17:03:43 crc kubenswrapper[4693]: E1212 17:03:43.927332 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e6b2e3-fa66-432b-b7f8-4d7c4228105a" containerName="registry-server"
Dec 12 17:03:43 crc kubenswrapper[4693]: I1212 17:03:43.927352 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e6b2e3-fa66-432b-b7f8-4d7c4228105a" containerName="registry-server"
Dec 12 17:03:43 crc kubenswrapper[4693]: E1212 17:03:43.927399 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e6b2e3-fa66-432b-b7f8-4d7c4228105a" containerName="extract-utilities"
Dec 12 17:03:43 crc kubenswrapper[4693]: I1212 17:03:43.927407 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e6b2e3-fa66-432b-b7f8-4d7c4228105a" containerName="extract-utilities"
Dec 12 17:03:43 crc kubenswrapper[4693]: E1212 17:03:43.927425 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e6b2e3-fa66-432b-b7f8-4d7c4228105a" containerName="extract-content"
Dec 12 17:03:43 crc kubenswrapper[4693]: I1212 17:03:43.927433 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e6b2e3-fa66-432b-b7f8-4d7c4228105a" containerName="extract-content"
Dec 12 17:03:43 crc kubenswrapper[4693]: I1212 17:03:43.927715 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="70e6b2e3-fa66-432b-b7f8-4d7c4228105a" containerName="registry-server"
Dec 12 17:03:43 crc kubenswrapper[4693]: I1212 17:03:43.929711 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:43 crc kubenswrapper[4693]: I1212 17:03:43.949635 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vrtp4"]
Dec 12 17:03:43 crc kubenswrapper[4693]: I1212 17:03:43.996872 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5577\" (UniqueName: \"kubernetes.io/projected/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-kube-api-access-x5577\") pod \"community-operators-vrtp4\" (UID: \"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265\") " pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:43 crc kubenswrapper[4693]: I1212 17:03:43.996985 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-utilities\") pod \"community-operators-vrtp4\" (UID: \"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265\") " pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:43 crc kubenswrapper[4693]: I1212 17:03:43.997060 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-catalog-content\") pod \"community-operators-vrtp4\" (UID: \"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265\") " pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:44 crc kubenswrapper[4693]: I1212 17:03:44.102339 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-utilities\") pod \"community-operators-vrtp4\" (UID: \"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265\") " pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:44 crc kubenswrapper[4693]: I1212 17:03:44.102480 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-catalog-content\") pod \"community-operators-vrtp4\" (UID: \"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265\") " pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:44 crc kubenswrapper[4693]: I1212 17:03:44.102643 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5577\" (UniqueName: \"kubernetes.io/projected/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-kube-api-access-x5577\") pod \"community-operators-vrtp4\" (UID: \"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265\") " pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:44 crc kubenswrapper[4693]: I1212 17:03:44.102866 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-utilities\") pod \"community-operators-vrtp4\" (UID: \"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265\") " pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:44 crc kubenswrapper[4693]: I1212 17:03:44.102973 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-catalog-content\") pod \"community-operators-vrtp4\" (UID: \"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265\") " pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:44 crc kubenswrapper[4693]: I1212 17:03:44.126012 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5577\" (UniqueName: \"kubernetes.io/projected/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-kube-api-access-x5577\") pod \"community-operators-vrtp4\" (UID: \"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265\") " pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:44 crc kubenswrapper[4693]: I1212 17:03:44.260202 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:44 crc kubenswrapper[4693]: I1212 17:03:44.846921 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vrtp4"]
Dec 12 17:03:45 crc kubenswrapper[4693]: I1212 17:03:45.016692 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrtp4" event={"ID":"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265","Type":"ContainerStarted","Data":"64e52ec7d7d448d6e94f93651165a9f4c883e95b90d7f415e5bd6e82f603f934"}
Dec 12 17:03:46 crc kubenswrapper[4693]: I1212 17:03:46.032314 4693 generic.go:334] "Generic (PLEG): container finished" podID="b9ba4ddf-8c47-4c5e-9fd3-997df7b82265" containerID="c442d7457cb5909b2f766f7fb4a1eb3af720d1752e7000ed0ad74883d8eed9e5" exitCode=0
Dec 12 17:03:46 crc kubenswrapper[4693]: I1212 17:03:46.032440 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrtp4" event={"ID":"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265","Type":"ContainerDied","Data":"c442d7457cb5909b2f766f7fb4a1eb3af720d1752e7000ed0ad74883d8eed9e5"}
Dec 12 17:03:48 crc kubenswrapper[4693]: I1212 17:03:48.060521 4693 generic.go:334] "Generic (PLEG): container finished" podID="b9ba4ddf-8c47-4c5e-9fd3-997df7b82265" containerID="4cb924723077f1555fd8f81a02c25b9b3b9b92798e1a2715ccf018ff3a50f56e" exitCode=0
Dec 12 17:03:48 crc kubenswrapper[4693]: I1212 17:03:48.060617 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrtp4" event={"ID":"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265","Type":"ContainerDied","Data":"4cb924723077f1555fd8f81a02c25b9b3b9b92798e1a2715ccf018ff3a50f56e"}
Dec 12 17:03:49 crc kubenswrapper[4693]: I1212 17:03:49.074060 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrtp4" event={"ID":"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265","Type":"ContainerStarted","Data":"0991a638e42a45e56184c7cad8a60b8af4062d2a7bc9374b40d3a8054df41d4f"}
Dec 12 17:03:49 crc kubenswrapper[4693]: I1212 17:03:49.107418 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vrtp4" podStartSLOduration=3.544887837 podStartE2EDuration="6.10738181s" podCreationTimestamp="2025-12-12 17:03:43 +0000 UTC" firstStartedPulling="2025-12-12 17:03:46.035742323 +0000 UTC m=+4653.204381924" lastFinishedPulling="2025-12-12 17:03:48.598236276 +0000 UTC m=+4655.766875897" observedRunningTime="2025-12-12 17:03:49.090708594 +0000 UTC m=+4656.259348195" watchObservedRunningTime="2025-12-12 17:03:49.10738181 +0000 UTC m=+4656.276021411"
Dec 12 17:03:54 crc kubenswrapper[4693]: I1212 17:03:54.261165 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:54 crc kubenswrapper[4693]: I1212 17:03:54.261800 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:54 crc kubenswrapper[4693]: I1212 17:03:54.325982 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:54 crc kubenswrapper[4693]: I1212 17:03:54.357769 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90"
Dec 12 17:03:54 crc kubenswrapper[4693]: E1212 17:03:54.358093 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d"
Dec 12 17:03:55 crc kubenswrapper[4693]: I1212 17:03:55.240734 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:55 crc kubenswrapper[4693]: I1212 17:03:55.301423 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vrtp4"]
Dec 12 17:03:57 crc kubenswrapper[4693]: I1212 17:03:57.181770 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vrtp4" podUID="b9ba4ddf-8c47-4c5e-9fd3-997df7b82265" containerName="registry-server" containerID="cri-o://0991a638e42a45e56184c7cad8a60b8af4062d2a7bc9374b40d3a8054df41d4f" gracePeriod=2
Dec 12 17:03:57 crc kubenswrapper[4693]: I1212 17:03:57.757329 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:57 crc kubenswrapper[4693]: I1212 17:03:57.896235 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5577\" (UniqueName: \"kubernetes.io/projected/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-kube-api-access-x5577\") pod \"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265\" (UID: \"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265\") "
Dec 12 17:03:57 crc kubenswrapper[4693]: I1212 17:03:57.896493 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-catalog-content\") pod \"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265\" (UID: \"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265\") "
Dec 12 17:03:57 crc kubenswrapper[4693]: I1212 17:03:57.896524 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-utilities\") pod \"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265\" (UID: \"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265\") "
Dec 12 17:03:57 crc kubenswrapper[4693]: I1212 17:03:57.897826 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-utilities" (OuterVolumeSpecName: "utilities") pod "b9ba4ddf-8c47-4c5e-9fd3-997df7b82265" (UID: "b9ba4ddf-8c47-4c5e-9fd3-997df7b82265"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 17:03:57 crc kubenswrapper[4693]: I1212 17:03:57.912750 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-kube-api-access-x5577" (OuterVolumeSpecName: "kube-api-access-x5577") pod "b9ba4ddf-8c47-4c5e-9fd3-997df7b82265" (UID: "b9ba4ddf-8c47-4c5e-9fd3-997df7b82265"). InnerVolumeSpecName "kube-api-access-x5577". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 17:03:57 crc kubenswrapper[4693]: I1212 17:03:57.980227 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9ba4ddf-8c47-4c5e-9fd3-997df7b82265" (UID: "b9ba4ddf-8c47-4c5e-9fd3-997df7b82265"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 12 17:03:57 crc kubenswrapper[4693]: I1212 17:03:57.999419 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5577\" (UniqueName: \"kubernetes.io/projected/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-kube-api-access-x5577\") on node \"crc\" DevicePath \"\""
Dec 12 17:03:57 crc kubenswrapper[4693]: I1212 17:03:57.999459 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 17:03:57 crc kubenswrapper[4693]: I1212 17:03:57.999472 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 17:03:58 crc kubenswrapper[4693]: I1212 17:03:58.198636 4693 generic.go:334] "Generic (PLEG): container finished" podID="b9ba4ddf-8c47-4c5e-9fd3-997df7b82265" containerID="0991a638e42a45e56184c7cad8a60b8af4062d2a7bc9374b40d3a8054df41d4f" exitCode=0
Dec 12 17:03:58 crc kubenswrapper[4693]: I1212 17:03:58.198691 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrtp4" event={"ID":"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265","Type":"ContainerDied","Data":"0991a638e42a45e56184c7cad8a60b8af4062d2a7bc9374b40d3a8054df41d4f"}
Dec 12 17:03:58 crc kubenswrapper[4693]: I1212 17:03:58.198720 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrtp4" event={"ID":"b9ba4ddf-8c47-4c5e-9fd3-997df7b82265","Type":"ContainerDied","Data":"64e52ec7d7d448d6e94f93651165a9f4c883e95b90d7f415e5bd6e82f603f934"}
Dec 12 17:03:58 crc kubenswrapper[4693]: I1212 17:03:58.198741 4693 scope.go:117] "RemoveContainer" containerID="0991a638e42a45e56184c7cad8a60b8af4062d2a7bc9374b40d3a8054df41d4f"
Dec 12 17:03:58 crc kubenswrapper[4693]: I1212 17:03:58.199016 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vrtp4"
Dec 12 17:03:58 crc kubenswrapper[4693]: I1212 17:03:58.232155 4693 scope.go:117] "RemoveContainer" containerID="4cb924723077f1555fd8f81a02c25b9b3b9b92798e1a2715ccf018ff3a50f56e"
Dec 12 17:03:58 crc kubenswrapper[4693]: I1212 17:03:58.277952 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vrtp4"]
Dec 12 17:03:58 crc kubenswrapper[4693]: I1212 17:03:58.278676 4693 scope.go:117] "RemoveContainer" containerID="c442d7457cb5909b2f766f7fb4a1eb3af720d1752e7000ed0ad74883d8eed9e5"
Dec 12 17:03:58 crc kubenswrapper[4693]: I1212 17:03:58.289682 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vrtp4"]
Dec 12 17:03:58 crc kubenswrapper[4693]: I1212 17:03:58.340631 4693 scope.go:117] "RemoveContainer" containerID="0991a638e42a45e56184c7cad8a60b8af4062d2a7bc9374b40d3a8054df41d4f"
Dec 12 17:03:58 crc kubenswrapper[4693]: E1212 17:03:58.341543 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0991a638e42a45e56184c7cad8a60b8af4062d2a7bc9374b40d3a8054df41d4f\": container with ID starting with 0991a638e42a45e56184c7cad8a60b8af4062d2a7bc9374b40d3a8054df41d4f not found: ID does not exist" containerID="0991a638e42a45e56184c7cad8a60b8af4062d2a7bc9374b40d3a8054df41d4f"
Dec 12 17:03:58 crc kubenswrapper[4693]: I1212 17:03:58.341583 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0991a638e42a45e56184c7cad8a60b8af4062d2a7bc9374b40d3a8054df41d4f"} err="failed to get container status \"0991a638e42a45e56184c7cad8a60b8af4062d2a7bc9374b40d3a8054df41d4f\": rpc error: code = NotFound desc = could not find container \"0991a638e42a45e56184c7cad8a60b8af4062d2a7bc9374b40d3a8054df41d4f\": container with ID starting with 0991a638e42a45e56184c7cad8a60b8af4062d2a7bc9374b40d3a8054df41d4f not found: ID does not exist"
Dec 12 17:03:58 crc kubenswrapper[4693]: I1212 17:03:58.341608 4693 scope.go:117] "RemoveContainer" containerID="4cb924723077f1555fd8f81a02c25b9b3b9b92798e1a2715ccf018ff3a50f56e"
Dec 12 17:03:58 crc kubenswrapper[4693]: E1212 17:03:58.341876 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cb924723077f1555fd8f81a02c25b9b3b9b92798e1a2715ccf018ff3a50f56e\": container with ID starting with 4cb924723077f1555fd8f81a02c25b9b3b9b92798e1a2715ccf018ff3a50f56e not found: ID does not exist" containerID="4cb924723077f1555fd8f81a02c25b9b3b9b92798e1a2715ccf018ff3a50f56e"
Dec 12 17:03:58 crc kubenswrapper[4693]: I1212 17:03:58.341945 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cb924723077f1555fd8f81a02c25b9b3b9b92798e1a2715ccf018ff3a50f56e"} err="failed to get container status \"4cb924723077f1555fd8f81a02c25b9b3b9b92798e1a2715ccf018ff3a50f56e\": rpc error: code = NotFound desc = could not find container \"4cb924723077f1555fd8f81a02c25b9b3b9b92798e1a2715ccf018ff3a50f56e\": container with ID starting with 4cb924723077f1555fd8f81a02c25b9b3b9b92798e1a2715ccf018ff3a50f56e not found: ID does not exist"
Dec 12 17:03:58 crc kubenswrapper[4693]: I1212 17:03:58.341960 4693 scope.go:117] "RemoveContainer" containerID="c442d7457cb5909b2f766f7fb4a1eb3af720d1752e7000ed0ad74883d8eed9e5"
Dec 12 17:03:58 crc kubenswrapper[4693]: E1212 17:03:58.344054 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c442d7457cb5909b2f766f7fb4a1eb3af720d1752e7000ed0ad74883d8eed9e5\": container with ID starting with c442d7457cb5909b2f766f7fb4a1eb3af720d1752e7000ed0ad74883d8eed9e5 not found: ID does not exist" containerID="c442d7457cb5909b2f766f7fb4a1eb3af720d1752e7000ed0ad74883d8eed9e5"
Dec 12 17:03:58 crc kubenswrapper[4693]: I1212 17:03:58.344107 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c442d7457cb5909b2f766f7fb4a1eb3af720d1752e7000ed0ad74883d8eed9e5"} err="failed to get container status \"c442d7457cb5909b2f766f7fb4a1eb3af720d1752e7000ed0ad74883d8eed9e5\": rpc error: code = NotFound desc = could not find container \"c442d7457cb5909b2f766f7fb4a1eb3af720d1752e7000ed0ad74883d8eed9e5\": container with ID starting with c442d7457cb5909b2f766f7fb4a1eb3af720d1752e7000ed0ad74883d8eed9e5 not found: ID does not exist"
Dec 12 17:03:59 crc kubenswrapper[4693]: I1212 17:03:59.369923 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9ba4ddf-8c47-4c5e-9fd3-997df7b82265" path="/var/lib/kubelet/pods/b9ba4ddf-8c47-4c5e-9fd3-997df7b82265/volumes"
Dec 12 17:04:07 crc kubenswrapper[4693]: I1212 17:04:07.010789 4693 trace.go:236] Trace[155405694]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (12-Dec-2025 17:04:05.961) (total time: 1048ms):
Dec 12 17:04:07 crc kubenswrapper[4693]: Trace[155405694]: [1.048137276s] [1.048137276s] END
Dec 12 17:04:09 crc kubenswrapper[4693]: I1212 17:04:09.361838 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90"
Dec 12 17:04:09 crc kubenswrapper[4693]: E1212 17:04:09.362741 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d"
Dec 12 17:04:17 crc kubenswrapper[4693]: E1212 17:04:17.151626 4693 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.204:54418->38.102.83.204:43805: write tcp 38.102.83.204:54418->38.102.83.204:43805: write: broken pipe
Dec 12 17:04:21 crc kubenswrapper[4693]: I1212 17:04:21.357245 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90"
Dec 12 17:04:22 crc kubenswrapper[4693]: I1212 17:04:22.529006 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"54dc121b266eab792eddc96104876374fe8371d8cf861d63462f9c9f36c164fd"}
Dec 12 17:06:40 crc kubenswrapper[4693]: I1212 17:06:40.938258 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rkxk6"]
Dec 12 17:06:40 crc kubenswrapper[4693]: E1212 17:06:40.941780 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ba4ddf-8c47-4c5e-9fd3-997df7b82265" containerName="extract-content"
Dec 12 17:06:40 crc kubenswrapper[4693]: I1212 17:06:40.941938 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ba4ddf-8c47-4c5e-9fd3-997df7b82265" containerName="extract-content"
Dec 12 17:06:40 crc kubenswrapper[4693]: E1212 17:06:40.942063 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ba4ddf-8c47-4c5e-9fd3-997df7b82265" containerName="registry-server"
Dec 12 17:06:40 crc kubenswrapper[4693]: I1212 17:06:40.942147 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ba4ddf-8c47-4c5e-9fd3-997df7b82265" containerName="registry-server"
Dec 12 17:06:40 crc kubenswrapper[4693]: E1212 17:06:40.942261 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ba4ddf-8c47-4c5e-9fd3-997df7b82265" containerName="extract-utilities"
Dec 12 17:06:40 crc kubenswrapper[4693]: I1212 17:06:40.942378 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ba4ddf-8c47-4c5e-9fd3-997df7b82265" containerName="extract-utilities"
Dec 12 17:06:40 crc kubenswrapper[4693]: I1212 17:06:40.944403 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ba4ddf-8c47-4c5e-9fd3-997df7b82265" containerName="registry-server"
Dec 12 17:06:40 crc kubenswrapper[4693]: I1212 17:06:40.947832 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rkxk6"
Dec 12 17:06:40 crc kubenswrapper[4693]: I1212 17:06:40.968162 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rkxk6"]
Dec 12 17:06:41 crc kubenswrapper[4693]: I1212 17:06:41.097844 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxp44\" (UniqueName: \"kubernetes.io/projected/b9eac199-db3e-431d-9d0c-2c05134adb60-kube-api-access-cxp44\") pod \"certified-operators-rkxk6\" (UID: \"b9eac199-db3e-431d-9d0c-2c05134adb60\") " pod="openshift-marketplace/certified-operators-rkxk6"
Dec 12 17:06:41 crc kubenswrapper[4693]: I1212 17:06:41.098332 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9eac199-db3e-431d-9d0c-2c05134adb60-catalog-content\") pod \"certified-operators-rkxk6\" (UID: \"b9eac199-db3e-431d-9d0c-2c05134adb60\") " pod="openshift-marketplace/certified-operators-rkxk6"
Dec 12 17:06:41 crc kubenswrapper[4693]: I1212 17:06:41.098582 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9eac199-db3e-431d-9d0c-2c05134adb60-utilities\") pod \"certified-operators-rkxk6\" (UID: \"b9eac199-db3e-431d-9d0c-2c05134adb60\") " pod="openshift-marketplace/certified-operators-rkxk6"
Dec 12 17:06:41 crc kubenswrapper[4693]: I1212 17:06:41.200952 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxp44\" (UniqueName: \"kubernetes.io/projected/b9eac199-db3e-431d-9d0c-2c05134adb60-kube-api-access-cxp44\") pod \"certified-operators-rkxk6\" (UID: \"b9eac199-db3e-431d-9d0c-2c05134adb60\") " pod="openshift-marketplace/certified-operators-rkxk6"
Dec 12 17:06:41 crc kubenswrapper[4693]: I1212 17:06:41.201074 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9eac199-db3e-431d-9d0c-2c05134adb60-catalog-content\") pod \"certified-operators-rkxk6\" (UID: \"b9eac199-db3e-431d-9d0c-2c05134adb60\") " pod="openshift-marketplace/certified-operators-rkxk6"
Dec 12 17:06:41 crc kubenswrapper[4693]: I1212 17:06:41.201125 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9eac199-db3e-431d-9d0c-2c05134adb60-utilities\") pod \"certified-operators-rkxk6\" (UID: \"b9eac199-db3e-431d-9d0c-2c05134adb60\") " pod="openshift-marketplace/certified-operators-rkxk6"
Dec 12 17:06:41 crc kubenswrapper[4693]: I1212 17:06:41.201727 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9eac199-db3e-431d-9d0c-2c05134adb60-catalog-content\") pod \"certified-operators-rkxk6\" (UID: \"b9eac199-db3e-431d-9d0c-2c05134adb60\") " pod="openshift-marketplace/certified-operators-rkxk6"
Dec 12 17:06:41 crc kubenswrapper[4693]: I1212 17:06:41.201731 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9eac199-db3e-431d-9d0c-2c05134adb60-utilities\") pod \"certified-operators-rkxk6\" (UID: \"b9eac199-db3e-431d-9d0c-2c05134adb60\") " pod="openshift-marketplace/certified-operators-rkxk6"
Dec 12 17:06:41 crc kubenswrapper[4693]: I1212 17:06:41.227057 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxp44\" (UniqueName: \"kubernetes.io/projected/b9eac199-db3e-431d-9d0c-2c05134adb60-kube-api-access-cxp44\") pod \"certified-operators-rkxk6\" (UID: \"b9eac199-db3e-431d-9d0c-2c05134adb60\") " pod="openshift-marketplace/certified-operators-rkxk6"
Dec 12 17:06:41 crc kubenswrapper[4693]: I1212 17:06:41.269890 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rkxk6"
Dec 12 17:06:41 crc kubenswrapper[4693]: I1212 17:06:41.885229 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rkxk6"]
Dec 12 17:06:42 crc kubenswrapper[4693]: I1212 17:06:42.381242 4693 generic.go:334] "Generic (PLEG): container finished" podID="b9eac199-db3e-431d-9d0c-2c05134adb60" containerID="3397f1f192bb4367fc19cef3fc36da700d73887858328150e824253b827ab026" exitCode=0
Dec 12 17:06:42 crc kubenswrapper[4693]: I1212 17:06:42.381313 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkxk6" event={"ID":"b9eac199-db3e-431d-9d0c-2c05134adb60","Type":"ContainerDied","Data":"3397f1f192bb4367fc19cef3fc36da700d73887858328150e824253b827ab026"}
Dec 12 17:06:42 crc kubenswrapper[4693]: I1212 17:06:42.381359 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkxk6" event={"ID":"b9eac199-db3e-431d-9d0c-2c05134adb60","Type":"ContainerStarted","Data":"43d043a49ef8f1db930815b66b3b8c692d1ace6b06ff65f0508081cbca05e914"}
Dec 12 17:06:42 crc kubenswrapper[4693]: I1212 17:06:42.530810 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 17:06:42 crc kubenswrapper[4693]: I1212 17:06:42.531240 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 17:06:43 crc kubenswrapper[4693]: I1212 17:06:43.408985 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkxk6" event={"ID":"b9eac199-db3e-431d-9d0c-2c05134adb60","Type":"ContainerStarted","Data":"9198f5908e5836d25f7764bbb6e6c148f98fe49deebf3740aa03644f7039fd80"}
Dec 12 17:06:45 crc kubenswrapper[4693]: I1212 17:06:45.435956 4693 generic.go:334] "Generic (PLEG): container finished" podID="b9eac199-db3e-431d-9d0c-2c05134adb60" containerID="9198f5908e5836d25f7764bbb6e6c148f98fe49deebf3740aa03644f7039fd80" exitCode=0
Dec 12 17:06:45 crc kubenswrapper[4693]: I1212 17:06:45.436024 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkxk6" event={"ID":"b9eac199-db3e-431d-9d0c-2c05134adb60","Type":"ContainerDied","Data":"9198f5908e5836d25f7764bbb6e6c148f98fe49deebf3740aa03644f7039fd80"}
Dec 12 17:06:46 crc kubenswrapper[4693]: I1212 17:06:46.457868 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkxk6" event={"ID":"b9eac199-db3e-431d-9d0c-2c05134adb60","Type":"ContainerStarted","Data":"29007d75ac75fdf731aebc97fe358cbe274c5288a473699d40072f50cf9348a9"}
Dec 12 17:06:46 crc kubenswrapper[4693]: I1212 17:06:46.488912 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rkxk6" podStartSLOduration=2.782874332 podStartE2EDuration="6.488894218s" podCreationTimestamp="2025-12-12 17:06:40 +0000 UTC" firstStartedPulling="2025-12-12 17:06:42.384999225 +0000 UTC m=+4829.553638826" lastFinishedPulling="2025-12-12 17:06:46.091019101 +0000 UTC m=+4833.259658712" observedRunningTime="2025-12-12 17:06:46.477237847 +0000 UTC m=+4833.645877458" watchObservedRunningTime="2025-12-12 17:06:46.488894218 +0000 UTC m=+4833.657533819"
Dec 12 17:06:51 crc kubenswrapper[4693]: I1212 17:06:51.270323 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rkxk6"
Dec 12 17:06:51 crc kubenswrapper[4693]: I1212 17:06:51.271079 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rkxk6"
Dec 12 17:06:51 crc kubenswrapper[4693]: I1212 17:06:51.378085 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rkxk6"
Dec 12 17:06:51 crc kubenswrapper[4693]: I1212 17:06:51.568719 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rkxk6"
Dec 12 17:06:51 crc kubenswrapper[4693]: I1212 17:06:51.629624 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rkxk6"]
Dec 12 17:06:53 crc kubenswrapper[4693]: I1212 17:06:53.544998 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rkxk6" podUID="b9eac199-db3e-431d-9d0c-2c05134adb60" containerName="registry-server" containerID="cri-o://29007d75ac75fdf731aebc97fe358cbe274c5288a473699d40072f50cf9348a9" gracePeriod=2
Dec 12 17:06:54 crc kubenswrapper[4693]: I1212 17:06:54.559873 4693 generic.go:334] "Generic (PLEG): container finished" podID="b9eac199-db3e-431d-9d0c-2c05134adb60" containerID="29007d75ac75fdf731aebc97fe358cbe274c5288a473699d40072f50cf9348a9" exitCode=0
Dec 12 17:06:54 crc kubenswrapper[4693]: I1212 17:06:54.559962 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkxk6" event={"ID":"b9eac199-db3e-431d-9d0c-2c05134adb60","Type":"ContainerDied","Data":"29007d75ac75fdf731aebc97fe358cbe274c5288a473699d40072f50cf9348a9"}
event={"ID":"b9eac199-db3e-431d-9d0c-2c05134adb60","Type":"ContainerDied","Data":"29007d75ac75fdf731aebc97fe358cbe274c5288a473699d40072f50cf9348a9"} Dec 12 17:06:54 crc kubenswrapper[4693]: I1212 17:06:54.560648 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkxk6" event={"ID":"b9eac199-db3e-431d-9d0c-2c05134adb60","Type":"ContainerDied","Data":"43d043a49ef8f1db930815b66b3b8c692d1ace6b06ff65f0508081cbca05e914"} Dec 12 17:06:54 crc kubenswrapper[4693]: I1212 17:06:54.560695 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43d043a49ef8f1db930815b66b3b8c692d1ace6b06ff65f0508081cbca05e914" Dec 12 17:06:54 crc kubenswrapper[4693]: I1212 17:06:54.638486 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rkxk6" Dec 12 17:06:54 crc kubenswrapper[4693]: I1212 17:06:54.659492 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9eac199-db3e-431d-9d0c-2c05134adb60-catalog-content\") pod \"b9eac199-db3e-431d-9d0c-2c05134adb60\" (UID: \"b9eac199-db3e-431d-9d0c-2c05134adb60\") " Dec 12 17:06:54 crc kubenswrapper[4693]: I1212 17:06:54.659626 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxp44\" (UniqueName: \"kubernetes.io/projected/b9eac199-db3e-431d-9d0c-2c05134adb60-kube-api-access-cxp44\") pod \"b9eac199-db3e-431d-9d0c-2c05134adb60\" (UID: \"b9eac199-db3e-431d-9d0c-2c05134adb60\") " Dec 12 17:06:54 crc kubenswrapper[4693]: I1212 17:06:54.659710 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9eac199-db3e-431d-9d0c-2c05134adb60-utilities\") pod \"b9eac199-db3e-431d-9d0c-2c05134adb60\" (UID: \"b9eac199-db3e-431d-9d0c-2c05134adb60\") " Dec 12 17:06:54 crc kubenswrapper[4693]: I1212 17:06:54.661697 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9eac199-db3e-431d-9d0c-2c05134adb60-utilities" (OuterVolumeSpecName: "utilities") pod "b9eac199-db3e-431d-9d0c-2c05134adb60" (UID: "b9eac199-db3e-431d-9d0c-2c05134adb60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 17:06:54 crc kubenswrapper[4693]: I1212 17:06:54.693458 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9eac199-db3e-431d-9d0c-2c05134adb60-kube-api-access-cxp44" (OuterVolumeSpecName: "kube-api-access-cxp44") pod "b9eac199-db3e-431d-9d0c-2c05134adb60" (UID: "b9eac199-db3e-431d-9d0c-2c05134adb60"). InnerVolumeSpecName "kube-api-access-cxp44". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 17:06:54 crc kubenswrapper[4693]: I1212 17:06:54.732385 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9eac199-db3e-431d-9d0c-2c05134adb60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9eac199-db3e-431d-9d0c-2c05134adb60" (UID: "b9eac199-db3e-431d-9d0c-2c05134adb60"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 17:06:54 crc kubenswrapper[4693]: I1212 17:06:54.765430 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxp44\" (UniqueName: \"kubernetes.io/projected/b9eac199-db3e-431d-9d0c-2c05134adb60-kube-api-access-cxp44\") on node \"crc\" DevicePath \"\"" Dec 12 17:06:54 crc kubenswrapper[4693]: I1212 17:06:54.765890 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9eac199-db3e-431d-9d0c-2c05134adb60-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 17:06:54 crc kubenswrapper[4693]: I1212 17:06:54.765912 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9eac199-db3e-431d-9d0c-2c05134adb60-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 17:06:55 crc kubenswrapper[4693]: I1212 17:06:55.582120 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rkxk6" Dec 12 17:06:55 crc kubenswrapper[4693]: I1212 17:06:55.642934 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rkxk6"] Dec 12 17:06:55 crc kubenswrapper[4693]: I1212 17:06:55.663112 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rkxk6"] Dec 12 17:06:57 crc kubenswrapper[4693]: I1212 17:06:57.373169 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9eac199-db3e-431d-9d0c-2c05134adb60" path="/var/lib/kubelet/pods/b9eac199-db3e-431d-9d0c-2c05134adb60/volumes" Dec 12 17:07:12 crc kubenswrapper[4693]: I1212 17:07:12.531045 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 17:07:12 crc kubenswrapper[4693]: I1212 17:07:12.532043 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 17:07:21 crc kubenswrapper[4693]: E1212 17:07:21.306457 4693 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.204:53680->38.102.83.204:43805: write tcp 38.102.83.204:53680->38.102.83.204:43805: write: connection reset by peer Dec 12 17:07:42 crc kubenswrapper[4693]: I1212 17:07:42.530413 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 17:07:42 crc kubenswrapper[4693]: I1212 17:07:42.531269 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 17:07:42 crc kubenswrapper[4693]: I1212 17:07:42.531350 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 17:07:42 crc kubenswrapper[4693]: I1212 17:07:42.532602 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"54dc121b266eab792eddc96104876374fe8371d8cf861d63462f9c9f36c164fd"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 17:07:42 crc kubenswrapper[4693]: I1212 17:07:42.532670 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://54dc121b266eab792eddc96104876374fe8371d8cf861d63462f9c9f36c164fd" gracePeriod=600 Dec 12 17:07:43 crc kubenswrapper[4693]: I1212 17:07:43.240777 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="54dc121b266eab792eddc96104876374fe8371d8cf861d63462f9c9f36c164fd" exitCode=0 Dec 12 17:07:43 crc kubenswrapper[4693]: I1212 17:07:43.241009 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"54dc121b266eab792eddc96104876374fe8371d8cf861d63462f9c9f36c164fd"} Dec 12 17:07:43 crc kubenswrapper[4693]: I1212 17:07:43.241335 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb"} Dec 12 17:07:43 crc kubenswrapper[4693]: I1212 17:07:43.241405 4693 scope.go:117] "RemoveContainer" containerID="5fe358b29dbdd45a650f6c7421984a835ceccd05f1f907e81a1d5530adcf5e90" Dec 12 17:08:50 crc kubenswrapper[4693]: I1212 17:08:50.788768 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-z5g4l" podUID="df1e4454-429c-4d2a-b372-b33ee0e88e6b" containerName="nmstate-handler" probeResult="failure" output="command timed out" Dec 12 17:09:42 crc kubenswrapper[4693]: I1212 17:09:42.530378 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 17:09:42 crc kubenswrapper[4693]: I1212 17:09:42.532415 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 17:10:12 crc kubenswrapper[4693]: I1212 17:10:12.530721 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 17:10:12 crc kubenswrapper[4693]: I1212 17:10:12.531448 4693 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 17:10:42 crc kubenswrapper[4693]: I1212 17:10:42.530654 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 17:10:42 crc kubenswrapper[4693]: I1212 17:10:42.531621 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 17:10:42 crc kubenswrapper[4693]: I1212 17:10:42.531706 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" Dec 12 17:10:42 crc kubenswrapper[4693]: I1212 17:10:42.533010 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb"} pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 17:10:42 crc kubenswrapper[4693]: I1212 17:10:42.533136 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" containerID="cri-o://5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" gracePeriod=600 Dec 12 17:10:42 crc kubenswrapper[4693]: E1212 17:10:42.659421 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:10:43 crc kubenswrapper[4693]: I1212 17:10:43.631887 4693 generic.go:334] "Generic (PLEG): container finished" podID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" exitCode=0 Dec 12 17:10:43 crc kubenswrapper[4693]: I1212 17:10:43.631983 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerDied","Data":"5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb"} Dec 12 17:10:43 crc kubenswrapper[4693]: I1212 17:10:43.632381 4693 scope.go:117] "RemoveContainer" containerID="54dc121b266eab792eddc96104876374fe8371d8cf861d63462f9c9f36c164fd" Dec 12 17:10:43 crc kubenswrapper[4693]: I1212 17:10:43.633355 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:10:43 crc kubenswrapper[4693]: E1212 
17:10:43.633715 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:10:58 crc kubenswrapper[4693]: I1212 17:10:58.356877 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:10:58 crc kubenswrapper[4693]: E1212 17:10:58.357771 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:11:11 crc kubenswrapper[4693]: I1212 17:11:11.358003 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:11:11 crc kubenswrapper[4693]: E1212 17:11:11.359047 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:11:22 crc kubenswrapper[4693]: I1212 17:11:22.357998 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:11:22 crc kubenswrapper[4693]: E1212 17:11:22.358901 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:11:35 crc kubenswrapper[4693]: I1212 17:11:35.357432 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:11:35 crc kubenswrapper[4693]: E1212 17:11:35.358735 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:11:49 crc kubenswrapper[4693]: I1212 17:11:49.358582 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:11:49 crc kubenswrapper[4693]: E1212 17:11:49.359693 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
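Note: the RemoveContainer / "back-off 5m0s" pairs above repeat roughly every 10-15 s because the pod worker keeps resyncing, but each attempt is rejected until the restart backoff window expires; "back-off 5m0s" says the backoff has reached its cap. Assuming the upstream kubelet defaults (initial delay ~10 s, doubling per crash, capped at 5 m -- these constants are assumptions, not read from this log), the progression looks like:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initial    = 10 * time.Second // assumed kubelet default
            maxBackoff = 5 * time.Minute  // matches "back-off 5m0s" above
        )
        backoff := initial
        for restart := 1; restart <= 7; restart++ {
            fmt.Printf("restart %d: wait %v\n", restart, backoff)
            backoff *= 2
            if backoff > maxBackoff {
                backoff = maxBackoff // every later crash waits the full 5m0s
            }
        }
    }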
Dec 12 17:11:49 crc kubenswrapper[4693]: E1212 17:11:49.359693 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d"
Dec 12 17:12:02 crc kubenswrapper[4693]: I1212 17:12:02.357601 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb"
Dec 12 17:12:02 crc kubenswrapper[4693]: E1212 17:12:02.358440 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d"
Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.533237 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gfx84"]
Dec 12 17:12:13 crc kubenswrapper[4693]: E1212 17:12:13.534900 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9eac199-db3e-431d-9d0c-2c05134adb60" containerName="extract-content"
Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.534938 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9eac199-db3e-431d-9d0c-2c05134adb60" containerName="extract-content"
Dec 12 17:12:13 crc kubenswrapper[4693]: E1212 17:12:13.534981 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9eac199-db3e-431d-9d0c-2c05134adb60" containerName="registry-server"
Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.534994 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9eac199-db3e-431d-9d0c-2c05134adb60" containerName="registry-server"
Dec 12 17:12:13 crc kubenswrapper[4693]: E1212 17:12:13.535043 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9eac199-db3e-431d-9d0c-2c05134adb60" containerName="extract-utilities"
Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.535057 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9eac199-db3e-431d-9d0c-2c05134adb60" containerName="extract-utilities"
Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.535515 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9eac199-db3e-431d-9d0c-2c05134adb60" containerName="registry-server"
Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.539762 4693 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-gfx84" Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.552483 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gfx84"] Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.678450 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab74865d-a053-4480-9f46-826e991244d5-catalog-content\") pod \"redhat-operators-gfx84\" (UID: \"ab74865d-a053-4480-9f46-826e991244d5\") " pod="openshift-marketplace/redhat-operators-gfx84" Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.678749 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab74865d-a053-4480-9f46-826e991244d5-utilities\") pod \"redhat-operators-gfx84\" (UID: \"ab74865d-a053-4480-9f46-826e991244d5\") " pod="openshift-marketplace/redhat-operators-gfx84" Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.678829 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7g8s\" (UniqueName: \"kubernetes.io/projected/ab74865d-a053-4480-9f46-826e991244d5-kube-api-access-t7g8s\") pod \"redhat-operators-gfx84\" (UID: \"ab74865d-a053-4480-9f46-826e991244d5\") " pod="openshift-marketplace/redhat-operators-gfx84" Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.782506 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab74865d-a053-4480-9f46-826e991244d5-utilities\") pod \"redhat-operators-gfx84\" (UID: \"ab74865d-a053-4480-9f46-826e991244d5\") " pod="openshift-marketplace/redhat-operators-gfx84" Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.782607 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7g8s\" (UniqueName: \"kubernetes.io/projected/ab74865d-a053-4480-9f46-826e991244d5-kube-api-access-t7g8s\") pod \"redhat-operators-gfx84\" (UID: \"ab74865d-a053-4480-9f46-826e991244d5\") " pod="openshift-marketplace/redhat-operators-gfx84" Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.783085 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab74865d-a053-4480-9f46-826e991244d5-utilities\") pod \"redhat-operators-gfx84\" (UID: \"ab74865d-a053-4480-9f46-826e991244d5\") " pod="openshift-marketplace/redhat-operators-gfx84" Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.783346 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab74865d-a053-4480-9f46-826e991244d5-catalog-content\") pod \"redhat-operators-gfx84\" (UID: \"ab74865d-a053-4480-9f46-826e991244d5\") " pod="openshift-marketplace/redhat-operators-gfx84" Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.783804 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab74865d-a053-4480-9f46-826e991244d5-catalog-content\") pod \"redhat-operators-gfx84\" (UID: \"ab74865d-a053-4480-9f46-826e991244d5\") " pod="openshift-marketplace/redhat-operators-gfx84" Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.808390 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-t7g8s\" (UniqueName: \"kubernetes.io/projected/ab74865d-a053-4480-9f46-826e991244d5-kube-api-access-t7g8s\") pod \"redhat-operators-gfx84\" (UID: \"ab74865d-a053-4480-9f46-826e991244d5\") " pod="openshift-marketplace/redhat-operators-gfx84" Dec 12 17:12:13 crc kubenswrapper[4693]: I1212 17:12:13.871260 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gfx84" Dec 12 17:12:14 crc kubenswrapper[4693]: I1212 17:12:14.592087 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gfx84"] Dec 12 17:12:15 crc kubenswrapper[4693]: I1212 17:12:15.760839 4693 generic.go:334] "Generic (PLEG): container finished" podID="ab74865d-a053-4480-9f46-826e991244d5" containerID="cc5a8725b365acd4605dd7c5155af87e4d5bd1cc30781e0ea7d2d71c5bc53f58" exitCode=0 Dec 12 17:12:15 crc kubenswrapper[4693]: I1212 17:12:15.760900 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gfx84" event={"ID":"ab74865d-a053-4480-9f46-826e991244d5","Type":"ContainerDied","Data":"cc5a8725b365acd4605dd7c5155af87e4d5bd1cc30781e0ea7d2d71c5bc53f58"} Dec 12 17:12:15 crc kubenswrapper[4693]: I1212 17:12:15.761433 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gfx84" event={"ID":"ab74865d-a053-4480-9f46-826e991244d5","Type":"ContainerStarted","Data":"af00e6663bfa2b6f487acc49678696e476cde78bc2c7829225d7fe2fb3ae8958"} Dec 12 17:12:15 crc kubenswrapper[4693]: I1212 17:12:15.765020 4693 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 17:12:16 crc kubenswrapper[4693]: I1212 17:12:16.778579 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gfx84" event={"ID":"ab74865d-a053-4480-9f46-826e991244d5","Type":"ContainerStarted","Data":"3ddcccb5f767ce20b07f463d6ac70c59284cc60f45373870d7553d9593c8d57e"} Dec 12 17:12:17 crc kubenswrapper[4693]: I1212 17:12:17.374146 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:12:17 crc kubenswrapper[4693]: E1212 17:12:17.377145 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:12:20 crc kubenswrapper[4693]: I1212 17:12:20.836970 4693 generic.go:334] "Generic (PLEG): container finished" podID="ab74865d-a053-4480-9f46-826e991244d5" containerID="3ddcccb5f767ce20b07f463d6ac70c59284cc60f45373870d7553d9593c8d57e" exitCode=0 Dec 12 17:12:20 crc kubenswrapper[4693]: I1212 17:12:20.837080 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gfx84" event={"ID":"ab74865d-a053-4480-9f46-826e991244d5","Type":"ContainerDied","Data":"3ddcccb5f767ce20b07f463d6ac70c59284cc60f45373870d7553d9593c8d57e"} Dec 12 17:12:21 crc kubenswrapper[4693]: I1212 17:12:21.852997 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gfx84" 
event={"ID":"ab74865d-a053-4480-9f46-826e991244d5","Type":"ContainerStarted","Data":"9e9946516bb930a908ddc6cb03d8f07ba23f77af3ae84118431a933168cc0af9"}
Dec 12 17:12:21 crc kubenswrapper[4693]: I1212 17:12:21.883099 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gfx84" podStartSLOduration=3.061676903 podStartE2EDuration="8.883078835s" podCreationTimestamp="2025-12-12 17:12:13 +0000 UTC" firstStartedPulling="2025-12-12 17:12:15.764694143 +0000 UTC m=+5162.933333754" lastFinishedPulling="2025-12-12 17:12:21.586096085 +0000 UTC m=+5168.754735686" observedRunningTime="2025-12-12 17:12:21.878265136 +0000 UTC m=+5169.046904737" watchObservedRunningTime="2025-12-12 17:12:21.883078835 +0000 UTC m=+5169.051718456"
Dec 12 17:12:23 crc kubenswrapper[4693]: I1212 17:12:23.872690 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gfx84"
Dec 12 17:12:23 crc kubenswrapper[4693]: I1212 17:12:23.873303 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gfx84"
Dec 12 17:12:24 crc kubenswrapper[4693]: I1212 17:12:24.952147 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gfx84" podUID="ab74865d-a053-4480-9f46-826e991244d5" containerName="registry-server" probeResult="failure" output=<
Dec 12 17:12:24 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s
Dec 12 17:12:24 crc kubenswrapper[4693]: >
Dec 12 17:12:31 crc kubenswrapper[4693]: I1212 17:12:31.357667 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb"
Dec 12 17:12:31 crc kubenswrapper[4693]: E1212 17:12:31.358351 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d"
Dec 12 17:12:34 crc kubenswrapper[4693]: I1212 17:12:34.931837 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gfx84" podUID="ab74865d-a053-4480-9f46-826e991244d5" containerName="registry-server" probeResult="failure" output=<
Dec 12 17:12:34 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s
Dec 12 17:12:34 crc kubenswrapper[4693]: >
Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.634988 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
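The startup-probe failures above come from the catalog pod's gRPC health check against its registry-server port; the `timeout: failed to connect service ":50051" within 1s` text is the output format of grpc_health_probe. A minimal sketch of the same check reduced to plain TCP reachability under the same 1s budget, which is enough to reproduce this failure mode; host, port, and messages mirror the log, the rest is illustrative:

```python
#!/usr/bin/env python3
"""Sketch of the registry-server startup probe seen above. The real
probe is a gRPC health check (grpc_health_probe -addr=:50051); this
only tests whether the port accepts a TCP connection in time."""
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers both connection refused and the 1s timeout in the log
        return False

if __name__ == "__main__":
    if port_open("127.0.0.1", 50051):
        print("startup probe: started")
    else:
        print('startup probe: failure: cannot reach ":50051" within 1s')
```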
Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.637144 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.639849 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.639935 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zbjwh"
Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.639934 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.641309 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.648146 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.721838 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzgbd\" (UniqueName: \"kubernetes.io/projected/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-kube-api-access-dzgbd\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest"
Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.721922 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-config-data\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest"
Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.722036 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest"
Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.722067 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest"
Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.722187 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest"
Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.722213 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest"
Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.722245 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName:
\"kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.722321 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.722587 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.828729 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.828858 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.828968 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.829105 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzgbd\" (UniqueName: \"kubernetes.io/projected/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-kube-api-access-dzgbd\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.829180 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-config-data\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.829303 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.829336 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.829525 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.829561 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.834197 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-config-data\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.834562 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.835090 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.836042 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.837358 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.838130 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.847313 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") 
" pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.852989 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.858335 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzgbd\" (UniqueName: \"kubernetes.io/projected/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-kube-api-access-dzgbd\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:37 crc kubenswrapper[4693]: I1212 17:12:37.898851 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " pod="openstack/tempest-tests-tempest" Dec 12 17:12:38 crc kubenswrapper[4693]: I1212 17:12:38.000509 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 12 17:12:39 crc kubenswrapper[4693]: I1212 17:12:39.386315 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 12 17:12:39 crc kubenswrapper[4693]: W1212 17:12:39.392379 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa2d51d9_8a13_4b35_846e_3f2e1fa7c64b.slice/crio-5ea910e026b7d47ea38f0a9cd65fb4bf21b77b368b7cb054a12214ab21768a3e WatchSource:0}: Error finding container 5ea910e026b7d47ea38f0a9cd65fb4bf21b77b368b7cb054a12214ab21768a3e: Status 404 returned error can't find the container with id 5ea910e026b7d47ea38f0a9cd65fb4bf21b77b368b7cb054a12214ab21768a3e Dec 12 17:12:40 crc kubenswrapper[4693]: I1212 17:12:40.069195 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b","Type":"ContainerStarted","Data":"5ea910e026b7d47ea38f0a9cd65fb4bf21b77b368b7cb054a12214ab21768a3e"} Dec 12 17:12:43 crc kubenswrapper[4693]: I1212 17:12:43.369900 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:12:43 crc kubenswrapper[4693]: E1212 17:12:43.371314 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:12:43 crc kubenswrapper[4693]: I1212 17:12:43.930053 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gfx84" Dec 12 17:12:43 crc kubenswrapper[4693]: I1212 17:12:43.984991 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gfx84" Dec 12 17:12:44 crc kubenswrapper[4693]: I1212 17:12:44.800634 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gfx84"] Dec 12 17:12:45 crc 
kubenswrapper[4693]: I1212 17:12:45.134246 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gfx84" podUID="ab74865d-a053-4480-9f46-826e991244d5" containerName="registry-server" containerID="cri-o://9e9946516bb930a908ddc6cb03d8f07ba23f77af3ae84118431a933168cc0af9" gracePeriod=2 Dec 12 17:12:46 crc kubenswrapper[4693]: I1212 17:12:46.159514 4693 generic.go:334] "Generic (PLEG): container finished" podID="ab74865d-a053-4480-9f46-826e991244d5" containerID="9e9946516bb930a908ddc6cb03d8f07ba23f77af3ae84118431a933168cc0af9" exitCode=0 Dec 12 17:12:46 crc kubenswrapper[4693]: I1212 17:12:46.159603 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gfx84" event={"ID":"ab74865d-a053-4480-9f46-826e991244d5","Type":"ContainerDied","Data":"9e9946516bb930a908ddc6cb03d8f07ba23f77af3ae84118431a933168cc0af9"} Dec 12 17:12:48 crc kubenswrapper[4693]: I1212 17:12:48.342348 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gfx84" Dec 12 17:12:48 crc kubenswrapper[4693]: I1212 17:12:48.449501 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7g8s\" (UniqueName: \"kubernetes.io/projected/ab74865d-a053-4480-9f46-826e991244d5-kube-api-access-t7g8s\") pod \"ab74865d-a053-4480-9f46-826e991244d5\" (UID: \"ab74865d-a053-4480-9f46-826e991244d5\") " Dec 12 17:12:48 crc kubenswrapper[4693]: I1212 17:12:48.449630 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab74865d-a053-4480-9f46-826e991244d5-utilities\") pod \"ab74865d-a053-4480-9f46-826e991244d5\" (UID: \"ab74865d-a053-4480-9f46-826e991244d5\") " Dec 12 17:12:48 crc kubenswrapper[4693]: I1212 17:12:48.449728 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab74865d-a053-4480-9f46-826e991244d5-catalog-content\") pod \"ab74865d-a053-4480-9f46-826e991244d5\" (UID: \"ab74865d-a053-4480-9f46-826e991244d5\") " Dec 12 17:12:48 crc kubenswrapper[4693]: I1212 17:12:48.450771 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab74865d-a053-4480-9f46-826e991244d5-utilities" (OuterVolumeSpecName: "utilities") pod "ab74865d-a053-4480-9f46-826e991244d5" (UID: "ab74865d-a053-4480-9f46-826e991244d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 17:12:48 crc kubenswrapper[4693]: I1212 17:12:48.453106 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab74865d-a053-4480-9f46-826e991244d5-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 17:12:48 crc kubenswrapper[4693]: I1212 17:12:48.459406 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab74865d-a053-4480-9f46-826e991244d5-kube-api-access-t7g8s" (OuterVolumeSpecName: "kube-api-access-t7g8s") pod "ab74865d-a053-4480-9f46-826e991244d5" (UID: "ab74865d-a053-4480-9f46-826e991244d5"). InnerVolumeSpecName "kube-api-access-t7g8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 17:12:48 crc kubenswrapper[4693]: I1212 17:12:48.555944 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7g8s\" (UniqueName: \"kubernetes.io/projected/ab74865d-a053-4480-9f46-826e991244d5-kube-api-access-t7g8s\") on node \"crc\" DevicePath \"\"" Dec 12 17:12:48 crc kubenswrapper[4693]: I1212 17:12:48.577901 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab74865d-a053-4480-9f46-826e991244d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab74865d-a053-4480-9f46-826e991244d5" (UID: "ab74865d-a053-4480-9f46-826e991244d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 17:12:48 crc kubenswrapper[4693]: I1212 17:12:48.659212 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab74865d-a053-4480-9f46-826e991244d5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 17:12:49 crc kubenswrapper[4693]: I1212 17:12:49.206675 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gfx84" event={"ID":"ab74865d-a053-4480-9f46-826e991244d5","Type":"ContainerDied","Data":"af00e6663bfa2b6f487acc49678696e476cde78bc2c7829225d7fe2fb3ae8958"} Dec 12 17:12:49 crc kubenswrapper[4693]: I1212 17:12:49.206788 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gfx84" Dec 12 17:12:49 crc kubenswrapper[4693]: I1212 17:12:49.207043 4693 scope.go:117] "RemoveContainer" containerID="9e9946516bb930a908ddc6cb03d8f07ba23f77af3ae84118431a933168cc0af9" Dec 12 17:12:49 crc kubenswrapper[4693]: I1212 17:12:49.234942 4693 scope.go:117] "RemoveContainer" containerID="3ddcccb5f767ce20b07f463d6ac70c59284cc60f45373870d7553d9593c8d57e" Dec 12 17:12:49 crc kubenswrapper[4693]: I1212 17:12:49.262873 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gfx84"] Dec 12 17:12:49 crc kubenswrapper[4693]: I1212 17:12:49.279778 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gfx84"] Dec 12 17:12:49 crc kubenswrapper[4693]: I1212 17:12:49.284973 4693 scope.go:117] "RemoveContainer" containerID="cc5a8725b365acd4605dd7c5155af87e4d5bd1cc30781e0ea7d2d71c5bc53f58" Dec 12 17:12:49 crc kubenswrapper[4693]: I1212 17:12:49.379058 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab74865d-a053-4480-9f46-826e991244d5" path="/var/lib/kubelet/pods/ab74865d-a053-4480-9f46-826e991244d5/volumes" Dec 12 17:12:56 crc kubenswrapper[4693]: I1212 17:12:56.357656 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:12:56 crc kubenswrapper[4693]: E1212 17:12:56.358341 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:13:02 crc kubenswrapper[4693]: I1212 17:13:02.410703 4693 scope.go:117] "RemoveContainer" containerID="3397f1f192bb4367fc19cef3fc36da700d73887858328150e824253b827ab026" Dec 12 
17:13:11 crc kubenswrapper[4693]: I1212 17:13:11.358820 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:13:11 crc kubenswrapper[4693]: E1212 17:13:11.360004 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:13:23 crc kubenswrapper[4693]: I1212 17:13:23.015637 4693 scope.go:117] "RemoveContainer" containerID="29007d75ac75fdf731aebc97fe358cbe274c5288a473699d40072f50cf9348a9" Dec 12 17:13:23 crc kubenswrapper[4693]: E1212 17:13:23.048844 4693 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 12 17:13:23 crc kubenswrapper[4693]: E1212 17:13:23.050002 4693 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzgbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:
*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 17:13:23 crc kubenswrapper[4693]: E1212 17:13:23.051250 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b" Dec 12 17:13:23 crc kubenswrapper[4693]: I1212 17:13:23.062499 4693 scope.go:117] "RemoveContainer" containerID="9198f5908e5836d25f7764bbb6e6c148f98fe49deebf3740aa03644f7039fd80" Dec 12 17:13:23 crc kubenswrapper[4693]: I1212 17:13:23.375166 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:13:23 crc kubenswrapper[4693]: E1212 17:13:23.376248 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:13:23 crc kubenswrapper[4693]: E1212 17:13:23.661649 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b" Dec 12 17:13:36 crc kubenswrapper[4693]: I1212 17:13:36.914265 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 12 17:13:38 crc kubenswrapper[4693]: I1212 17:13:38.365706 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:13:38 crc kubenswrapper[4693]: E1212 17:13:38.367846 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:13:38 crc kubenswrapper[4693]: I1212 17:13:38.903578 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b","Type":"ContainerStarted","Data":"4fe49fad2e4e5ff4139c2cad3eeff20628a090db56175e1013c05f052514b9db"}
Dec 12 17:13:38 crc kubenswrapper[4693]: I1212 17:13:38.940348 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.4253651640000005 podStartE2EDuration="1m2.940328401s" podCreationTimestamp="2025-12-12 17:12:36 +0000 UTC" firstStartedPulling="2025-12-12 17:12:39.396301507 +0000 UTC m=+5186.564941108" lastFinishedPulling="2025-12-12 17:13:36.911264714 +0000 UTC m=+5244.079904345" observedRunningTime="2025-12-12 17:13:38.925918628 +0000 UTC m=+5246.094558269" watchObservedRunningTime="2025-12-12 17:13:38.940328401 +0000 UTC m=+5246.108968012"
Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.346853 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hrdrz"]
Dec 12 17:13:49 crc kubenswrapper[4693]: E1212 17:13:49.348023 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab74865d-a053-4480-9f46-826e991244d5" containerName="extract-content"
Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.348042 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab74865d-a053-4480-9f46-826e991244d5" containerName="extract-content"
Dec 12 17:13:49 crc kubenswrapper[4693]: E1212 17:13:49.348073 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab74865d-a053-4480-9f46-826e991244d5" containerName="extract-utilities"
Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.348083 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab74865d-a053-4480-9f46-826e991244d5" containerName="extract-utilities"
Dec 12 17:13:49 crc kubenswrapper[4693]: E1212 17:13:49.348100 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab74865d-a053-4480-9f46-826e991244d5" containerName="registry-server"
Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.348110 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab74865d-a053-4480-9f46-826e991244d5" containerName="registry-server"
Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.348454 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab74865d-a053-4480-9f46-826e991244d5" containerName="registry-server"
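The pod_startup_latency_tracker record above for tempest-tests-tempest reports podStartSLOduration as the end-to-end startup time minus the window spent pulling images. A sketch of that relationship using the timestamps copied from the record; kubelet mixes wall-clock and monotonic readings, so this only reproduces the logged values to within a few tens of microseconds:

```python
#!/usr/bin/env python3
"""Relate podStartE2EDuration and podStartSLOduration from the
'Observed pod startup duration' record above: SLO = E2E - image pulls.
Timestamps are taken from the log; the truncation to microseconds is
an assumption of this sketch, not kubelet behaviour."""
from datetime import datetime, timezone

FMT = "%Y-%m-%d %H:%M:%S.%f %z"

def ts(s: str) -> datetime:
    # "2025-12-12 17:12:39.396301507 +0000" -> datetime (µs precision)
    date, clock, off = s.split()
    return datetime.strptime(f"{date} {clock[:15]} {off}", FMT)

created      = datetime(2025, 12, 12, 17, 12, 36, tzinfo=timezone.utc)
first_pull   = ts("2025-12-12 17:12:39.396301507 +0000")
last_pull    = ts("2025-12-12 17:13:36.911264714 +0000")
observed_run = ts("2025-12-12 17:13:38.940328401 +0000")  # watchObservedRunningTime

e2e     = (observed_run - created).total_seconds()   # ~62.940s = "1m2.940328401s"
pulling = (last_pull - first_pull).total_seconds()   # ~57.515s pulling the image
slo     = e2e - pulling                              # ~5.425s = podStartSLOduration
print(f"E2E {e2e:.3f}s, pulling {pulling:.3f}s, SLO {slo:.3f}s")
```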
Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.350491 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrdrz"
Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.358089 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb"
Dec 12 17:13:49 crc kubenswrapper[4693]: E1212 17:13:49.358492 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d"
Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.380762 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrdrz"]
Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.517777 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b132dd51-eb8c-4214-81e5-c646aa820e70-utilities\") pod \"redhat-marketplace-hrdrz\" (UID: \"b132dd51-eb8c-4214-81e5-c646aa820e70\") " pod="openshift-marketplace/redhat-marketplace-hrdrz"
Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.517862 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgdhf\" (UniqueName: \"kubernetes.io/projected/b132dd51-eb8c-4214-81e5-c646aa820e70-kube-api-access-rgdhf\") pod \"redhat-marketplace-hrdrz\" (UID: \"b132dd51-eb8c-4214-81e5-c646aa820e70\") " pod="openshift-marketplace/redhat-marketplace-hrdrz"
Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.518697 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b132dd51-eb8c-4214-81e5-c646aa820e70-catalog-content\") pod \"redhat-marketplace-hrdrz\" (UID: \"b132dd51-eb8c-4214-81e5-c646aa820e70\") " pod="openshift-marketplace/redhat-marketplace-hrdrz"
Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.621099 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b132dd51-eb8c-4214-81e5-c646aa820e70-utilities\") pod \"redhat-marketplace-hrdrz\" (UID: \"b132dd51-eb8c-4214-81e5-c646aa820e70\") " pod="openshift-marketplace/redhat-marketplace-hrdrz"
Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.621220 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgdhf\" (UniqueName: \"kubernetes.io/projected/b132dd51-eb8c-4214-81e5-c646aa820e70-kube-api-access-rgdhf\") pod \"redhat-marketplace-hrdrz\" (UID: \"b132dd51-eb8c-4214-81e5-c646aa820e70\") " pod="openshift-marketplace/redhat-marketplace-hrdrz"
Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.621518 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b132dd51-eb8c-4214-81e5-c646aa820e70-catalog-content\") pod \"redhat-marketplace-hrdrz\" (UID: \"b132dd51-eb8c-4214-81e5-c646aa820e70\") " pod="openshift-marketplace/redhat-marketplace-hrdrz"
Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.621650 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName:
\"kubernetes.io/empty-dir/b132dd51-eb8c-4214-81e5-c646aa820e70-utilities\") pod \"redhat-marketplace-hrdrz\" (UID: \"b132dd51-eb8c-4214-81e5-c646aa820e70\") " pod="openshift-marketplace/redhat-marketplace-hrdrz" Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.622108 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b132dd51-eb8c-4214-81e5-c646aa820e70-catalog-content\") pod \"redhat-marketplace-hrdrz\" (UID: \"b132dd51-eb8c-4214-81e5-c646aa820e70\") " pod="openshift-marketplace/redhat-marketplace-hrdrz" Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.648594 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgdhf\" (UniqueName: \"kubernetes.io/projected/b132dd51-eb8c-4214-81e5-c646aa820e70-kube-api-access-rgdhf\") pod \"redhat-marketplace-hrdrz\" (UID: \"b132dd51-eb8c-4214-81e5-c646aa820e70\") " pod="openshift-marketplace/redhat-marketplace-hrdrz" Dec 12 17:13:49 crc kubenswrapper[4693]: I1212 17:13:49.675917 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrdrz" Dec 12 17:13:50 crc kubenswrapper[4693]: W1212 17:13:50.255067 4693 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb132dd51_eb8c_4214_81e5_c646aa820e70.slice/crio-57b549d8d1851110628194a4671f3b35bb45ec4df25071fe8ace82d8b502143e WatchSource:0}: Error finding container 57b549d8d1851110628194a4671f3b35bb45ec4df25071fe8ace82d8b502143e: Status 404 returned error can't find the container with id 57b549d8d1851110628194a4671f3b35bb45ec4df25071fe8ace82d8b502143e Dec 12 17:13:50 crc kubenswrapper[4693]: I1212 17:13:50.264831 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrdrz"] Dec 12 17:13:51 crc kubenswrapper[4693]: I1212 17:13:51.039720 4693 generic.go:334] "Generic (PLEG): container finished" podID="b132dd51-eb8c-4214-81e5-c646aa820e70" containerID="9fa02131b543d7996ab61cace3a07121ab72fa7b7f05c2e9529b71e71891d853" exitCode=0 Dec 12 17:13:51 crc kubenswrapper[4693]: I1212 17:13:51.039804 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrdrz" event={"ID":"b132dd51-eb8c-4214-81e5-c646aa820e70","Type":"ContainerDied","Data":"9fa02131b543d7996ab61cace3a07121ab72fa7b7f05c2e9529b71e71891d853"} Dec 12 17:13:51 crc kubenswrapper[4693]: I1212 17:13:51.040035 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrdrz" event={"ID":"b132dd51-eb8c-4214-81e5-c646aa820e70","Type":"ContainerStarted","Data":"57b549d8d1851110628194a4671f3b35bb45ec4df25071fe8ace82d8b502143e"} Dec 12 17:13:53 crc kubenswrapper[4693]: I1212 17:13:53.063703 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrdrz" event={"ID":"b132dd51-eb8c-4214-81e5-c646aa820e70","Type":"ContainerStarted","Data":"66a14a40f0da1696216851cc8e57a9b09f5eea2b941615b7a2072fd20614f557"} Dec 12 17:13:54 crc kubenswrapper[4693]: I1212 17:13:54.076012 4693 generic.go:334] "Generic (PLEG): container finished" podID="b132dd51-eb8c-4214-81e5-c646aa820e70" containerID="66a14a40f0da1696216851cc8e57a9b09f5eea2b941615b7a2072fd20614f557" exitCode=0 Dec 12 17:13:54 crc kubenswrapper[4693]: I1212 17:13:54.076055 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrdrz" 
event={"ID":"b132dd51-eb8c-4214-81e5-c646aa820e70","Type":"ContainerDied","Data":"66a14a40f0da1696216851cc8e57a9b09f5eea2b941615b7a2072fd20614f557"} Dec 12 17:13:55 crc kubenswrapper[4693]: I1212 17:13:55.093525 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrdrz" event={"ID":"b132dd51-eb8c-4214-81e5-c646aa820e70","Type":"ContainerStarted","Data":"d5b536f013a018ca9e87a4bd6f58d21826e958dc3ba9e1a2e411ba2b028a3a32"} Dec 12 17:13:55 crc kubenswrapper[4693]: I1212 17:13:55.128944 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hrdrz" podStartSLOduration=2.677088231 podStartE2EDuration="6.128918977s" podCreationTimestamp="2025-12-12 17:13:49 +0000 UTC" firstStartedPulling="2025-12-12 17:13:51.041787281 +0000 UTC m=+5258.210426882" lastFinishedPulling="2025-12-12 17:13:54.493618027 +0000 UTC m=+5261.662257628" observedRunningTime="2025-12-12 17:13:55.122317481 +0000 UTC m=+5262.290957102" watchObservedRunningTime="2025-12-12 17:13:55.128918977 +0000 UTC m=+5262.297558618" Dec 12 17:13:59 crc kubenswrapper[4693]: I1212 17:13:59.676644 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hrdrz" Dec 12 17:13:59 crc kubenswrapper[4693]: I1212 17:13:59.677113 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hrdrz" Dec 12 17:13:59 crc kubenswrapper[4693]: I1212 17:13:59.745697 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hrdrz" Dec 12 17:14:00 crc kubenswrapper[4693]: I1212 17:14:00.265077 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hrdrz" Dec 12 17:14:00 crc kubenswrapper[4693]: I1212 17:14:00.343337 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrdrz"] Dec 12 17:14:00 crc kubenswrapper[4693]: I1212 17:14:00.358054 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:14:00 crc kubenswrapper[4693]: E1212 17:14:00.358341 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:14:02 crc kubenswrapper[4693]: I1212 17:14:02.178290 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hrdrz" podUID="b132dd51-eb8c-4214-81e5-c646aa820e70" containerName="registry-server" containerID="cri-o://d5b536f013a018ca9e87a4bd6f58d21826e958dc3ba9e1a2e411ba2b028a3a32" gracePeriod=2 Dec 12 17:14:02 crc kubenswrapper[4693]: I1212 17:14:02.808469 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrdrz" Dec 12 17:14:02 crc kubenswrapper[4693]: I1212 17:14:02.986802 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b132dd51-eb8c-4214-81e5-c646aa820e70-catalog-content\") pod \"b132dd51-eb8c-4214-81e5-c646aa820e70\" (UID: \"b132dd51-eb8c-4214-81e5-c646aa820e70\") " Dec 12 17:14:02 crc kubenswrapper[4693]: I1212 17:14:02.986853 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgdhf\" (UniqueName: \"kubernetes.io/projected/b132dd51-eb8c-4214-81e5-c646aa820e70-kube-api-access-rgdhf\") pod \"b132dd51-eb8c-4214-81e5-c646aa820e70\" (UID: \"b132dd51-eb8c-4214-81e5-c646aa820e70\") " Dec 12 17:14:02 crc kubenswrapper[4693]: I1212 17:14:02.986944 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b132dd51-eb8c-4214-81e5-c646aa820e70-utilities\") pod \"b132dd51-eb8c-4214-81e5-c646aa820e70\" (UID: \"b132dd51-eb8c-4214-81e5-c646aa820e70\") " Dec 12 17:14:02 crc kubenswrapper[4693]: I1212 17:14:02.987927 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b132dd51-eb8c-4214-81e5-c646aa820e70-utilities" (OuterVolumeSpecName: "utilities") pod "b132dd51-eb8c-4214-81e5-c646aa820e70" (UID: "b132dd51-eb8c-4214-81e5-c646aa820e70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 17:14:02 crc kubenswrapper[4693]: I1212 17:14:02.998800 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b132dd51-eb8c-4214-81e5-c646aa820e70-kube-api-access-rgdhf" (OuterVolumeSpecName: "kube-api-access-rgdhf") pod "b132dd51-eb8c-4214-81e5-c646aa820e70" (UID: "b132dd51-eb8c-4214-81e5-c646aa820e70"). InnerVolumeSpecName "kube-api-access-rgdhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.011414 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b132dd51-eb8c-4214-81e5-c646aa820e70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b132dd51-eb8c-4214-81e5-c646aa820e70" (UID: "b132dd51-eb8c-4214-81e5-c646aa820e70"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.090331 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b132dd51-eb8c-4214-81e5-c646aa820e70-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.090449 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgdhf\" (UniqueName: \"kubernetes.io/projected/b132dd51-eb8c-4214-81e5-c646aa820e70-kube-api-access-rgdhf\") on node \"crc\" DevicePath \"\"" Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.090476 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b132dd51-eb8c-4214-81e5-c646aa820e70-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.212088 4693 generic.go:334] "Generic (PLEG): container finished" podID="b132dd51-eb8c-4214-81e5-c646aa820e70" containerID="d5b536f013a018ca9e87a4bd6f58d21826e958dc3ba9e1a2e411ba2b028a3a32" exitCode=0 Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.212161 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrdrz" event={"ID":"b132dd51-eb8c-4214-81e5-c646aa820e70","Type":"ContainerDied","Data":"d5b536f013a018ca9e87a4bd6f58d21826e958dc3ba9e1a2e411ba2b028a3a32"} Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.212201 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrdrz" event={"ID":"b132dd51-eb8c-4214-81e5-c646aa820e70","Type":"ContainerDied","Data":"57b549d8d1851110628194a4671f3b35bb45ec4df25071fe8ace82d8b502143e"} Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.212249 4693 scope.go:117] "RemoveContainer" containerID="d5b536f013a018ca9e87a4bd6f58d21826e958dc3ba9e1a2e411ba2b028a3a32" Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.212646 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrdrz" Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.278011 4693 scope.go:117] "RemoveContainer" containerID="66a14a40f0da1696216851cc8e57a9b09f5eea2b941615b7a2072fd20614f557" Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.301825 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrdrz"] Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.304162 4693 scope.go:117] "RemoveContainer" containerID="9fa02131b543d7996ab61cace3a07121ab72fa7b7f05c2e9529b71e71891d853" Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.319940 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrdrz"] Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.376538 4693 scope.go:117] "RemoveContainer" containerID="d5b536f013a018ca9e87a4bd6f58d21826e958dc3ba9e1a2e411ba2b028a3a32" Dec 12 17:14:03 crc kubenswrapper[4693]: E1212 17:14:03.378066 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5b536f013a018ca9e87a4bd6f58d21826e958dc3ba9e1a2e411ba2b028a3a32\": container with ID starting with d5b536f013a018ca9e87a4bd6f58d21826e958dc3ba9e1a2e411ba2b028a3a32 not found: ID does not exist" containerID="d5b536f013a018ca9e87a4bd6f58d21826e958dc3ba9e1a2e411ba2b028a3a32" Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.378102 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5b536f013a018ca9e87a4bd6f58d21826e958dc3ba9e1a2e411ba2b028a3a32"} err="failed to get container status \"d5b536f013a018ca9e87a4bd6f58d21826e958dc3ba9e1a2e411ba2b028a3a32\": rpc error: code = NotFound desc = could not find container \"d5b536f013a018ca9e87a4bd6f58d21826e958dc3ba9e1a2e411ba2b028a3a32\": container with ID starting with d5b536f013a018ca9e87a4bd6f58d21826e958dc3ba9e1a2e411ba2b028a3a32 not found: ID does not exist" Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.378124 4693 scope.go:117] "RemoveContainer" containerID="66a14a40f0da1696216851cc8e57a9b09f5eea2b941615b7a2072fd20614f557" Dec 12 17:14:03 crc kubenswrapper[4693]: E1212 17:14:03.378415 4693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66a14a40f0da1696216851cc8e57a9b09f5eea2b941615b7a2072fd20614f557\": container with ID starting with 66a14a40f0da1696216851cc8e57a9b09f5eea2b941615b7a2072fd20614f557 not found: ID does not exist" containerID="66a14a40f0da1696216851cc8e57a9b09f5eea2b941615b7a2072fd20614f557" Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.378444 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66a14a40f0da1696216851cc8e57a9b09f5eea2b941615b7a2072fd20614f557"} err="failed to get container status \"66a14a40f0da1696216851cc8e57a9b09f5eea2b941615b7a2072fd20614f557\": rpc error: code = NotFound desc = could not find container \"66a14a40f0da1696216851cc8e57a9b09f5eea2b941615b7a2072fd20614f557\": container with ID starting with 66a14a40f0da1696216851cc8e57a9b09f5eea2b941615b7a2072fd20614f557 not found: ID does not exist" Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.378459 4693 scope.go:117] "RemoveContainer" containerID="9fa02131b543d7996ab61cace3a07121ab72fa7b7f05c2e9529b71e71891d853" Dec 12 17:14:03 crc kubenswrapper[4693]: E1212 17:14:03.378822 4693 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9fa02131b543d7996ab61cace3a07121ab72fa7b7f05c2e9529b71e71891d853\": container with ID starting with 9fa02131b543d7996ab61cace3a07121ab72fa7b7f05c2e9529b71e71891d853 not found: ID does not exist" containerID="9fa02131b543d7996ab61cace3a07121ab72fa7b7f05c2e9529b71e71891d853" Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.378870 4693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fa02131b543d7996ab61cace3a07121ab72fa7b7f05c2e9529b71e71891d853"} err="failed to get container status \"9fa02131b543d7996ab61cace3a07121ab72fa7b7f05c2e9529b71e71891d853\": rpc error: code = NotFound desc = could not find container \"9fa02131b543d7996ab61cace3a07121ab72fa7b7f05c2e9529b71e71891d853\": container with ID starting with 9fa02131b543d7996ab61cace3a07121ab72fa7b7f05c2e9529b71e71891d853 not found: ID does not exist" Dec 12 17:14:03 crc kubenswrapper[4693]: I1212 17:14:03.382426 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b132dd51-eb8c-4214-81e5-c646aa820e70" path="/var/lib/kubelet/pods/b132dd51-eb8c-4214-81e5-c646aa820e70/volumes" Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.606229 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hsxzq"] Dec 12 17:14:05 crc kubenswrapper[4693]: E1212 17:14:05.607480 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b132dd51-eb8c-4214-81e5-c646aa820e70" containerName="extract-content" Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.607494 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b132dd51-eb8c-4214-81e5-c646aa820e70" containerName="extract-content" Dec 12 17:14:05 crc kubenswrapper[4693]: E1212 17:14:05.607521 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b132dd51-eb8c-4214-81e5-c646aa820e70" containerName="extract-utilities" Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.607527 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b132dd51-eb8c-4214-81e5-c646aa820e70" containerName="extract-utilities" Dec 12 17:14:05 crc kubenswrapper[4693]: E1212 17:14:05.607554 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b132dd51-eb8c-4214-81e5-c646aa820e70" containerName="registry-server" Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.607562 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="b132dd51-eb8c-4214-81e5-c646aa820e70" containerName="registry-server" Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.607813 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="b132dd51-eb8c-4214-81e5-c646aa820e70" containerName="registry-server" Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.609841 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.621187 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hsxzq"] Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.760635 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-catalog-content\") pod \"community-operators-hsxzq\" (UID: \"f0fa56db-49e1-4138-b485-aaaa14e7ebdd\") " pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.760675 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bklrz\" (UniqueName: \"kubernetes.io/projected/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-kube-api-access-bklrz\") pod \"community-operators-hsxzq\" (UID: \"f0fa56db-49e1-4138-b485-aaaa14e7ebdd\") " pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.760832 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-utilities\") pod \"community-operators-hsxzq\" (UID: \"f0fa56db-49e1-4138-b485-aaaa14e7ebdd\") " pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.865724 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-utilities\") pod \"community-operators-hsxzq\" (UID: \"f0fa56db-49e1-4138-b485-aaaa14e7ebdd\") " pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.866064 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-catalog-content\") pod \"community-operators-hsxzq\" (UID: \"f0fa56db-49e1-4138-b485-aaaa14e7ebdd\") " pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.866101 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bklrz\" (UniqueName: \"kubernetes.io/projected/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-kube-api-access-bklrz\") pod \"community-operators-hsxzq\" (UID: \"f0fa56db-49e1-4138-b485-aaaa14e7ebdd\") " pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.867038 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-utilities\") pod \"community-operators-hsxzq\" (UID: \"f0fa56db-49e1-4138-b485-aaaa14e7ebdd\") " pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.867310 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-catalog-content\") pod \"community-operators-hsxzq\" (UID: \"f0fa56db-49e1-4138-b485-aaaa14e7ebdd\") " pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.895198 4693 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bklrz\" (UniqueName: \"kubernetes.io/projected/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-kube-api-access-bklrz\") pod \"community-operators-hsxzq\" (UID: \"f0fa56db-49e1-4138-b485-aaaa14e7ebdd\") " pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:05 crc kubenswrapper[4693]: I1212 17:14:05.931586 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:06 crc kubenswrapper[4693]: I1212 17:14:06.781269 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hsxzq"] Dec 12 17:14:07 crc kubenswrapper[4693]: I1212 17:14:07.299390 4693 generic.go:334] "Generic (PLEG): container finished" podID="f0fa56db-49e1-4138-b485-aaaa14e7ebdd" containerID="f9dfc48197a06260c647523bcf23170cc494367c78d1e7855d6b3714c47db160" exitCode=0 Dec 12 17:14:07 crc kubenswrapper[4693]: I1212 17:14:07.299503 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hsxzq" event={"ID":"f0fa56db-49e1-4138-b485-aaaa14e7ebdd","Type":"ContainerDied","Data":"f9dfc48197a06260c647523bcf23170cc494367c78d1e7855d6b3714c47db160"} Dec 12 17:14:07 crc kubenswrapper[4693]: I1212 17:14:07.299688 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hsxzq" event={"ID":"f0fa56db-49e1-4138-b485-aaaa14e7ebdd","Type":"ContainerStarted","Data":"8643fa0676517f5102e1013d6ab5754ea711a74fb1d14c4363c1978c6048a800"} Dec 12 17:14:09 crc kubenswrapper[4693]: I1212 17:14:09.325733 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hsxzq" event={"ID":"f0fa56db-49e1-4138-b485-aaaa14e7ebdd","Type":"ContainerStarted","Data":"41f425ee5f0dbaf6baf70936e92a7189cc2fa68675ff6b82e72db2b8c04d3feb"} Dec 12 17:14:10 crc kubenswrapper[4693]: I1212 17:14:10.338841 4693 generic.go:334] "Generic (PLEG): container finished" podID="f0fa56db-49e1-4138-b485-aaaa14e7ebdd" containerID="41f425ee5f0dbaf6baf70936e92a7189cc2fa68675ff6b82e72db2b8c04d3feb" exitCode=0 Dec 12 17:14:10 crc kubenswrapper[4693]: I1212 17:14:10.338936 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hsxzq" event={"ID":"f0fa56db-49e1-4138-b485-aaaa14e7ebdd","Type":"ContainerDied","Data":"41f425ee5f0dbaf6baf70936e92a7189cc2fa68675ff6b82e72db2b8c04d3feb"} Dec 12 17:14:12 crc kubenswrapper[4693]: I1212 17:14:12.359777 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hsxzq" event={"ID":"f0fa56db-49e1-4138-b485-aaaa14e7ebdd","Type":"ContainerStarted","Data":"0c534982692b2a2a44f449c231e110caf743450f170d82e3eb5eacb8bfa04521"} Dec 12 17:14:12 crc kubenswrapper[4693]: I1212 17:14:12.401040 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hsxzq" podStartSLOduration=3.891556745 podStartE2EDuration="7.401013907s" podCreationTimestamp="2025-12-12 17:14:05 +0000 UTC" firstStartedPulling="2025-12-12 17:14:07.301185942 +0000 UTC m=+5274.469825543" lastFinishedPulling="2025-12-12 17:14:10.810643094 +0000 UTC m=+5277.979282705" observedRunningTime="2025-12-12 17:14:12.388816332 +0000 UTC m=+5279.557455933" watchObservedRunningTime="2025-12-12 17:14:12.401013907 +0000 UTC m=+5279.569653528" Dec 12 17:14:14 crc kubenswrapper[4693]: I1212 17:14:14.357657 4693 scope.go:117] "RemoveContainer" 
containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:14:14 crc kubenswrapper[4693]: E1212 17:14:14.358152 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:14:15 crc kubenswrapper[4693]: I1212 17:14:15.933233 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:15 crc kubenswrapper[4693]: I1212 17:14:15.933643 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:17 crc kubenswrapper[4693]: I1212 17:14:17.001967 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hsxzq" podUID="f0fa56db-49e1-4138-b485-aaaa14e7ebdd" containerName="registry-server" probeResult="failure" output=< Dec 12 17:14:17 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:14:17 crc kubenswrapper[4693]: > Dec 12 17:14:26 crc kubenswrapper[4693]: I1212 17:14:26.011653 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:26 crc kubenswrapper[4693]: I1212 17:14:26.120972 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:26 crc kubenswrapper[4693]: I1212 17:14:26.413306 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hsxzq"] Dec 12 17:14:27 crc kubenswrapper[4693]: I1212 17:14:27.359250 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:14:27 crc kubenswrapper[4693]: E1212 17:14:27.360372 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:14:27 crc kubenswrapper[4693]: I1212 17:14:27.558537 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hsxzq" podUID="f0fa56db-49e1-4138-b485-aaaa14e7ebdd" containerName="registry-server" containerID="cri-o://0c534982692b2a2a44f449c231e110caf743450f170d82e3eb5eacb8bfa04521" gracePeriod=2 Dec 12 17:14:28 crc kubenswrapper[4693]: I1212 17:14:28.380518 4693 patch_prober.go:28] interesting pod/metrics-server-66fcfb545d-whswt container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:28 crc kubenswrapper[4693]: I1212 17:14:28.380518 4693 patch_prober.go:28] interesting pod/metrics-server-66fcfb545d-whswt container/metrics-server namespace/openshift-monitoring: 
Liveness probe status=failure output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:28 crc kubenswrapper[4693]: I1212 17:14:28.382030 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" podUID="cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:28 crc kubenswrapper[4693]: I1212 17:14:28.382035 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" podUID="cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:28 crc kubenswrapper[4693]: I1212 17:14:28.571208 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hsxzq" event={"ID":"f0fa56db-49e1-4138-b485-aaaa14e7ebdd","Type":"ContainerDied","Data":"0c534982692b2a2a44f449c231e110caf743450f170d82e3eb5eacb8bfa04521"} Dec 12 17:14:28 crc kubenswrapper[4693]: I1212 17:14:28.571231 4693 generic.go:334] "Generic (PLEG): container finished" podID="f0fa56db-49e1-4138-b485-aaaa14e7ebdd" containerID="0c534982692b2a2a44f449c231e110caf743450f170d82e3eb5eacb8bfa04521" exitCode=0 Dec 12 17:14:28 crc kubenswrapper[4693]: I1212 17:14:28.739743 4693 patch_prober.go:28] interesting pod/monitoring-plugin-78f748b45-xcpg8 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:28 crc kubenswrapper[4693]: I1212 17:14:28.739796 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" podUID="f267d07b-2357-45ee-999d-94fad4f7bbce" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:29 crc kubenswrapper[4693]: I1212 17:14:29.136525 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-7qqwq" podUID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:29 crc kubenswrapper[4693]: I1212 17:14:29.789008 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="73559c8b-d017-4a5d-aced-3da25d264b0a" containerName="galera" probeResult="failure" output="command timed out" Dec 12 17:14:29 crc kubenswrapper[4693]: I1212 17:14:29.789031 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="215ca5de-ce9f-4370-8aff-715dd1e384a3" containerName="ovn-northd" probeResult="failure" output="command timed out" Dec 12 17:14:29 crc kubenswrapper[4693]: I1212 17:14:29.789030 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" 
podUID="215ca5de-ce9f-4370-8aff-715dd1e384a3" containerName="ovn-northd" probeResult="failure" output="command timed out" Dec 12 17:14:29 crc kubenswrapper[4693]: I1212 17:14:29.794127 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="73559c8b-d017-4a5d-aced-3da25d264b0a" containerName="galera" probeResult="failure" output="command timed out" Dec 12 17:14:29 crc kubenswrapper[4693]: I1212 17:14:29.904865 4693 trace.go:236] Trace[1231472797]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (12-Dec-2025 17:14:27.764) (total time: 2139ms): Dec 12 17:14:29 crc kubenswrapper[4693]: Trace[1231472797]: [2.139552709s] [2.139552709s] END Dec 12 17:14:31 crc kubenswrapper[4693]: I1212 17:14:31.788691 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="ec693a73-a415-42a1-98f4-86438aa58d56" containerName="galera" probeResult="failure" output="command timed out" Dec 12 17:14:31 crc kubenswrapper[4693]: I1212 17:14:31.789534 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="ec693a73-a415-42a1-98f4-86438aa58d56" containerName="galera" probeResult="failure" output="command timed out" Dec 12 17:14:31 crc kubenswrapper[4693]: I1212 17:14:31.916654 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-jstfj container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:31 crc kubenswrapper[4693]: I1212 17:14:31.916746 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" podUID="752b64e1-40d2-47cd-a555-0e23495e2443" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:31 crc kubenswrapper[4693]: I1212 17:14:31.916831 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-jstfj container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:31 crc kubenswrapper[4693]: I1212 17:14:31.916860 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" podUID="752b64e1-40d2-47cd-a555-0e23495e2443" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:31 crc kubenswrapper[4693]: I1212 17:14:31.937461 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-6fjrq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:31 crc kubenswrapper[4693]: I1212 17:14:31.937535 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" podUID="34840dce-2cd9-4cf3-81cc-be2fb6e08993" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:31 crc kubenswrapper[4693]: I1212 17:14:31.937636 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-6fjrq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:31 crc kubenswrapper[4693]: I1212 17:14:31.937659 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" podUID="34840dce-2cd9-4cf3-81cc-be2fb6e08993" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:32 crc kubenswrapper[4693]: I1212 17:14:32.797553 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Dec 12 17:14:32 crc kubenswrapper[4693]: I1212 17:14:32.965955 4693 patch_prober.go:28] interesting pod/controller-manager-6964955f74-9kcjr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:32 crc kubenswrapper[4693]: I1212 17:14:32.966051 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" podUID="4741216c-0a8d-4079-b459-cb459dc4f5b3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:32 crc kubenswrapper[4693]: I1212 17:14:32.965961 4693 patch_prober.go:28] interesting pod/controller-manager-6964955f74-9kcjr container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:32 crc kubenswrapper[4693]: I1212 17:14:32.966537 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" podUID="4741216c-0a8d-4079-b459-cb459dc4f5b3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:33 crc kubenswrapper[4693]: I1212 17:14:33.503853 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" podUID="a41df83d-6bb2-4c49-a431-f5851036a44d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:33 crc kubenswrapper[4693]: I1212 17:14:33.579699 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" podUID="96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Dec 12 17:14:33 crc kubenswrapper[4693]: I1212 17:14:33.683488 4693 patch_prober.go:28] interesting pod/route-controller-manager-66fdbf566b-4w29d container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:33 crc kubenswrapper[4693]: I1212 17:14:33.683542 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" podUID="41b94683-51bf-4720-9160-36bd373d88ba" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:33 crc kubenswrapper[4693]: I1212 17:14:33.683503 4693 patch_prober.go:28] interesting pod/route-controller-manager-66fdbf566b-4w29d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:33 crc kubenswrapper[4693]: I1212 17:14:33.683995 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" podUID="41b94683-51bf-4720-9160-36bd373d88ba" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:33 crc kubenswrapper[4693]: I1212 17:14:33.827511 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" podUID="5616685d-71d7-49b9-8c1b-6eccc11a74a1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:33 crc kubenswrapper[4693]: I1212 17:14:33.880520 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" podUID="26537316-7b55-48dc-b952-bc2220120194" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:34 crc kubenswrapper[4693]: I1212 17:14:34.523128 4693 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-zwt44 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:34 crc kubenswrapper[4693]: I1212 17:14:34.523492 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" podUID="cae4711d-6ae1-402f-9fc8-751998ed785d" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Dec 12 17:14:34 crc kubenswrapper[4693]: I1212 17:14:34.523135 4693 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-zwt44 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:34 crc kubenswrapper[4693]: I1212 17:14:34.523580 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" podUID="cae4711d-6ae1-402f-9fc8-751998ed785d" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:34 crc kubenswrapper[4693]: I1212 17:14:34.789740 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="215ca5de-ce9f-4370-8aff-715dd1e384a3" containerName="ovn-northd" probeResult="failure" output="command timed out" Dec 12 17:14:34 crc kubenswrapper[4693]: I1212 17:14:34.789770 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="215ca5de-ce9f-4370-8aff-715dd1e384a3" containerName="ovn-northd" probeResult="failure" output="command timed out" Dec 12 17:14:35 crc kubenswrapper[4693]: I1212 17:14:35.557791 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" podUID="51d89b29-7872-4e9d-9fdd-b1fdd7de6de3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:35 crc kubenswrapper[4693]: I1212 17:14:35.557791 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" podUID="51d89b29-7872-4e9d-9fdd-b1fdd7de6de3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:35 crc kubenswrapper[4693]: I1212 17:14:35.831513 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" podUID="ffd9365b-fb4c-4a2b-a168-fcf9cce89228" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:35 crc kubenswrapper[4693]: I1212 17:14:35.831570 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" podUID="ffd9365b-fb4c-4a2b-a168-fcf9cce89228" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:35 crc kubenswrapper[4693]: E1212 17:14:35.935967 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0c534982692b2a2a44f449c231e110caf743450f170d82e3eb5eacb8bfa04521 is running failed: container process not found" 
containerID="0c534982692b2a2a44f449c231e110caf743450f170d82e3eb5eacb8bfa04521" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 17:14:35 crc kubenswrapper[4693]: E1212 17:14:35.936489 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0c534982692b2a2a44f449c231e110caf743450f170d82e3eb5eacb8bfa04521 is running failed: container process not found" containerID="0c534982692b2a2a44f449c231e110caf743450f170d82e3eb5eacb8bfa04521" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 17:14:35 crc kubenswrapper[4693]: E1212 17:14:35.936935 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0c534982692b2a2a44f449c231e110caf743450f170d82e3eb5eacb8bfa04521 is running failed: container process not found" containerID="0c534982692b2a2a44f449c231e110caf743450f170d82e3eb5eacb8bfa04521" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 17:14:35 crc kubenswrapper[4693]: E1212 17:14:35.937007 4693 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0c534982692b2a2a44f449c231e110caf743450f170d82e3eb5eacb8bfa04521 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-hsxzq" podUID="f0fa56db-49e1-4138-b485-aaaa14e7ebdd" containerName="registry-server" Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.217557 4693 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6x8t container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.217614 4693 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6x8t container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.217648 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" podUID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.217683 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" podUID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.217753 4693 patch_prober.go:28] interesting pod/logging-loki-distributor-76cc67bf56-fsnzk container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 
17:14:36.217789 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" podUID="fee116a3-18ff-4755-b34f-82baa25eeefd" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.351848 4693 patch_prober.go:28] interesting pod/logging-loki-querier-5895d59bb8-gcwwx container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.351937 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" podUID="b89219b7-2b92-44d8-897c-beb5ef9d6861" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.457937 4693 patch_prober.go:28] interesting pod/logging-loki-query-frontend-84558f7c9f-ht5fw container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.457997 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" podUID="2691b993-4939-4cf2-84ab-1d34ea3dded9" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.916774 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-jstfj container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.917195 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" podUID="752b64e1-40d2-47cd-a555-0e23495e2443" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.916901 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-jstfj container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.917320 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" podUID="752b64e1-40d2-47cd-a555-0e23495e2443" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.937459 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-6fjrq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.937544 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" podUID="34840dce-2cd9-4cf3-81cc-be2fb6e08993" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.937569 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-6fjrq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:36 crc kubenswrapper[4693]: I1212 17:14:36.937611 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" podUID="34840dce-2cd9-4cf3-81cc-be2fb6e08993" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:37 crc kubenswrapper[4693]: I1212 17:14:37.640401 4693 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded" start-of-body= Dec 12 17:14:37 crc kubenswrapper[4693]: I1212 17:14:37.641100 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded" Dec 12 17:14:37 crc kubenswrapper[4693]: I1212 17:14:37.916001 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-jstfj container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.58:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:37 crc kubenswrapper[4693]: I1212 17:14:37.916676 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" podUID="26cbeab5-89ba-425a-87c0-8797ceb6957a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:37 crc kubenswrapper[4693]: I1212 17:14:37.916676 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" podUID="26cbeab5-89ba-425a-87c0-8797ceb6957a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:37 crc 
kubenswrapper[4693]: I1212 17:14:37.917113 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" podUID="752b64e1-40d2-47cd-a555-0e23495e2443" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:37 crc kubenswrapper[4693]: I1212 17:14:37.916694 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-jstfj container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.58:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:37 crc kubenswrapper[4693]: I1212 17:14:37.917379 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" podUID="752b64e1-40d2-47cd-a555-0e23495e2443" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:37 crc kubenswrapper[4693]: I1212 17:14:37.936897 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-6fjrq container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.57:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:37 crc kubenswrapper[4693]: I1212 17:14:37.936960 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-6fjrq container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.57:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:37 crc kubenswrapper[4693]: I1212 17:14:37.936971 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" podUID="34840dce-2cd9-4cf3-81cc-be2fb6e08993" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:37 crc kubenswrapper[4693]: I1212 17:14:37.936999 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" podUID="34840dce-2cd9-4cf3-81cc-be2fb6e08993" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.035602 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.035893 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 
17:14:38.035603 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.035980 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.214164 4693 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-wz942 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.214673 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" podUID="87e8f397-20cd-469f-924d-204ce1a8db47" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.393210 4693 patch_prober.go:28] interesting pod/console-operator-58897d9998-wcl2w container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.393316 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" podUID="f51bf74b-1d86-4a22-a355-f2c64a6516e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.393579 4693 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hhs2z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.393615 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" podUID="38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.393657 4693 patch_prober.go:28] interesting pod/console-operator-58897d9998-wcl2w container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.393670 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" podUID="f51bf74b-1d86-4a22-a355-f2c64a6516e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.393708 4693 patch_prober.go:28] interesting pod/metrics-server-66fcfb545d-whswt container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.393724 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" podUID="cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.393755 4693 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hhs2z container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.393768 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" podUID="38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.743636 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" podUID="57409d4d-edf7-400c-9fcf-d6116ac22968" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.784506 4693 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d86dw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.784539 4693 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d86dw container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.784565 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" 
podUID="61620225-2125-49da-94f6-f6ef9dd7e6ce" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.784566 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" podUID="61620225-2125-49da-94f6-f6ef9dd7e6ce" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.784617 4693 patch_prober.go:28] interesting pod/monitoring-plugin-78f748b45-xcpg8 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.784503 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" podUID="57409d4d-edf7-400c-9fcf-d6116ac22968" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.784636 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" podUID="f267d07b-2357-45ee-999d-94fad4f7bbce" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:38 crc kubenswrapper[4693]: I1212 17:14:38.799033 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Dec 12 17:14:39 crc kubenswrapper[4693]: I1212 17:14:39.218477 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-7qqwq" podUID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:39 crc kubenswrapper[4693]: I1212 17:14:39.300537 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-7qqwq" podUID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:39 crc kubenswrapper[4693]: I1212 17:14:39.300778 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-7qqwq" podUID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:39 crc kubenswrapper[4693]: I1212 17:14:39.300636 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" podUID="5a6d7731-5c1e-4b9b-b847-6deabf3f6af9" 
containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:39 crc kubenswrapper[4693]: I1212 17:14:39.301559 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" podUID="5a6d7731-5c1e-4b9b-b847-6deabf3f6af9" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:39 crc kubenswrapper[4693]: I1212 17:14:39.704769 4693 trace.go:236] Trace[604449450]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-ingester-0" (12-Dec-2025 17:14:35.002) (total time: 4701ms): Dec 12 17:14:39 crc kubenswrapper[4693]: Trace[604449450]: [4.701356454s] [4.701356454s] END Dec 12 17:14:39 crc kubenswrapper[4693]: I1212 17:14:39.786770 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="73559c8b-d017-4a5d-aced-3da25d264b0a" containerName="galera" probeResult="failure" output="command timed out" Dec 12 17:14:39 crc kubenswrapper[4693]: I1212 17:14:39.787987 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="73559c8b-d017-4a5d-aced-3da25d264b0a" containerName="galera" probeResult="failure" output="command timed out" Dec 12 17:14:39 crc kubenswrapper[4693]: I1212 17:14:39.788093 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="215ca5de-ce9f-4370-8aff-715dd1e384a3" containerName="ovn-northd" probeResult="failure" output="command timed out" Dec 12 17:14:39 crc kubenswrapper[4693]: I1212 17:14:39.788230 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="215ca5de-ce9f-4370-8aff-715dd1e384a3" containerName="ovn-northd" probeResult="failure" output="command timed out" Dec 12 17:14:39 crc kubenswrapper[4693]: I1212 17:14:39.788721 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ovn-northd-0" Dec 12 17:14:39 crc kubenswrapper[4693]: I1212 17:14:39.789104 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 12 17:14:39 crc kubenswrapper[4693]: I1212 17:14:39.807392 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ovn-northd" containerStatusID={"Type":"cri-o","ID":"b105c9ded490e7c88edc8ba7c3c4f5377050968b39ed6a553bf1ba3bce9814b9"} pod="openstack/ovn-northd-0" containerMessage="Container ovn-northd failed liveness probe, will be restarted" Dec 12 17:14:39 crc kubenswrapper[4693]: I1212 17:14:39.807542 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="215ca5de-ce9f-4370-8aff-715dd1e384a3" containerName="ovn-northd" containerID="cri-o://b105c9ded490e7c88edc8ba7c3c4f5377050968b39ed6a553bf1ba3bce9814b9" gracePeriod=30 Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.068679 4693 patch_prober.go:28] interesting pod/thanos-querier-7f697d8f45-x28ts container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.069163 4693 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" podUID="3808e52c-3efa-4017-b799-bc195fd1d611" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.217032 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.245493 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-ndkbb" podUID="a0fbfcb7-b516-452f-be80-ddd275ed0987" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.245511 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-ndkbb" podUID="a0fbfcb7-b516-452f-be80-ddd275ed0987" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.377944 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-utilities\") pod \"f0fa56db-49e1-4138-b485-aaaa14e7ebdd\" (UID: \"f0fa56db-49e1-4138-b485-aaaa14e7ebdd\") " Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.379134 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-catalog-content\") pod \"f0fa56db-49e1-4138-b485-aaaa14e7ebdd\" (UID: \"f0fa56db-49e1-4138-b485-aaaa14e7ebdd\") " Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.379259 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bklrz\" (UniqueName: \"kubernetes.io/projected/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-kube-api-access-bklrz\") pod \"f0fa56db-49e1-4138-b485-aaaa14e7ebdd\" (UID: \"f0fa56db-49e1-4138-b485-aaaa14e7ebdd\") " Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.380414 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-utilities" (OuterVolumeSpecName: "utilities") pod "f0fa56db-49e1-4138-b485-aaaa14e7ebdd" (UID: "f0fa56db-49e1-4138-b485-aaaa14e7ebdd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.414350 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-kube-api-access-bklrz" (OuterVolumeSpecName: "kube-api-access-bklrz") pod "f0fa56db-49e1-4138-b485-aaaa14e7ebdd" (UID: "f0fa56db-49e1-4138-b485-aaaa14e7ebdd"). InnerVolumeSpecName "kube-api-access-bklrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.468370 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0fa56db-49e1-4138-b485-aaaa14e7ebdd" (UID: "f0fa56db-49e1-4138-b485-aaaa14e7ebdd"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.483534 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.483579 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bklrz\" (UniqueName: \"kubernetes.io/projected/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-kube-api-access-bklrz\") on node \"crc\" DevicePath \"\"" Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.483592 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fa56db-49e1-4138-b485-aaaa14e7ebdd-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.729854 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-5655c58dd6-dbkxt" podUID="09a2f99b-1398-4e56-ac77-ae6e4d9aaac8" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.740229 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hsxzq" Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.740637 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hsxzq" event={"ID":"f0fa56db-49e1-4138-b485-aaaa14e7ebdd","Type":"ContainerDied","Data":"8643fa0676517f5102e1013d6ab5754ea711a74fb1d14c4363c1978c6048a800"} Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.741544 4693 scope.go:117] "RemoveContainer" containerID="0c534982692b2a2a44f449c231e110caf743450f170d82e3eb5eacb8bfa04521" Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.798095 4693 scope.go:117] "RemoveContainer" containerID="41f425ee5f0dbaf6baf70936e92a7189cc2fa68675ff6b82e72db2b8c04d3feb" Dec 12 17:14:40 crc kubenswrapper[4693]: I1212 17:14:40.830830 4693 scope.go:117] "RemoveContainer" containerID="f9dfc48197a06260c647523bcf23170cc494367c78d1e7855d6b3714c47db160" Dec 12 17:14:41 crc kubenswrapper[4693]: I1212 17:14:41.213176 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-f2llf" podUID="19868aeb-2fda-43a6-8801-7d72c8465394" containerName="registry-server" probeResult="failure" output=< Dec 12 17:14:41 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:14:41 crc kubenswrapper[4693]: > Dec 12 17:14:41 crc kubenswrapper[4693]: I1212 17:14:41.213187 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-f2llf" podUID="19868aeb-2fda-43a6-8801-7d72c8465394" containerName="registry-server" probeResult="failure" output=< Dec 12 17:14:41 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:14:41 crc kubenswrapper[4693]: > Dec 12 17:14:41 crc kubenswrapper[4693]: I1212 17:14:41.363823 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:14:41 crc kubenswrapper[4693]: E1212 17:14:41.373084 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:14:41 crc kubenswrapper[4693]: I1212 17:14:41.606986 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="215ca5de-ce9f-4370-8aff-715dd1e384a3" containerName="ovn-northd" probeResult="failure" output=< Dec 12 17:14:41 crc kubenswrapper[4693]: 2025-12-12T17:14:41Z|00001|unixctl|WARN|failed to connect to /tmp/ovn-northd.1.ctl Dec 12 17:14:41 crc kubenswrapper[4693]: ovn-appctl: cannot connect to "/tmp/ovn-northd.1.ctl" (No such file or directory) Dec 12 17:14:41 crc kubenswrapper[4693]: 2025-12-12T17:14:41Z|00001|unixctl|WARN|failed to connect to /tmp/ovn-northd.1.ctl Dec 12 17:14:41 crc kubenswrapper[4693]: ovn-appctl: cannot connect to "/tmp/ovn-northd.1.ctl" (No such file or directory) Dec 12 17:14:41 crc kubenswrapper[4693]: > Dec 12 17:14:41 crc kubenswrapper[4693]: I1212 17:14:41.789414 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="ec693a73-a415-42a1-98f4-86438aa58d56" containerName="galera" probeResult="failure" output="command timed out" Dec 12 17:14:41 crc kubenswrapper[4693]: I1212 17:14:41.789418 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="ec693a73-a415-42a1-98f4-86438aa58d56" containerName="galera" probeResult="failure" output="command timed out" Dec 12 17:14:41 crc kubenswrapper[4693]: I1212 17:14:41.917092 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-jstfj container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:41 crc kubenswrapper[4693]: I1212 17:14:41.917065 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-jstfj container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:41 crc kubenswrapper[4693]: I1212 17:14:41.917675 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" podUID="752b64e1-40d2-47cd-a555-0e23495e2443" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:41 crc kubenswrapper[4693]: I1212 17:14:41.917688 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" podUID="752b64e1-40d2-47cd-a555-0e23495e2443" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:41 crc kubenswrapper[4693]: I1212 17:14:41.938075 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-6fjrq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:41 crc kubenswrapper[4693]: I1212 17:14:41.938147 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" podUID="34840dce-2cd9-4cf3-81cc-be2fb6e08993" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:41 crc kubenswrapper[4693]: I1212 17:14:41.938182 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-6fjrq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:41 crc kubenswrapper[4693]: I1212 17:14:41.938265 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" podUID="34840dce-2cd9-4cf3-81cc-be2fb6e08993" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.192515 4693 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6x8t container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.192845 4693 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6x8t container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.192914 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" podUID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.196380 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" podUID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.611911 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="cd076b50-0211-4876-b7df-b7140ebac121" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.2:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.653131 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" podUID="ac211615-b518-4011-be82-483cbb246d4b" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.694510 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" podUID="ac211615-b518-4011-be82-483cbb246d4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.857889 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h" podUID="267498f5-fa7b-44ec-bd94-361a261e8844" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.858034 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h" podUID="267498f5-fa7b-44ec-bd94-361a261e8844" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.858422 4693 patch_prober.go:28] interesting pod/oauth-openshift-db548d47c-z22tr container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.858445 4693 patch_prober.go:28] interesting pod/oauth-openshift-db548d47c-z22tr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.858682 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" podUID="70395fde-23f6-41b0-a04e-c4568b405e9d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.858734 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" podUID="70395fde-23f6-41b0-a04e-c4568b405e9d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.940655 4693 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5c67884d5c-jpl4r container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.51:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.940729 4693 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" podUID="ac702b24-9bff-4198-a7f7-e368773fb8de" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.51:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.940791 4693 patch_prober.go:28] interesting pod/controller-manager-6964955f74-9kcjr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.941187 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" podUID="4741216c-0a8d-4079-b459-cb459dc4f5b3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.940831 4693 patch_prober.go:28] interesting pod/controller-manager-6964955f74-9kcjr container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.941229 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" podUID="4741216c-0a8d-4079-b459-cb459dc4f5b3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.940849 4693 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5c67884d5c-jpl4r container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.51:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:42 crc kubenswrapper[4693]: I1212 17:14:42.941259 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" podUID="ac702b24-9bff-4198-a7f7-e368773fb8de" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.51:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:43 crc kubenswrapper[4693]: I1212 17:14:43.268484 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh" podUID="4c46ca75-8071-4f2a-bda0-44bf851365cb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:43 crc kubenswrapper[4693]: I1212 17:14:43.268507 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" podUID="8c0d2adb-6fec-4574-8733-b6e817a943e5" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:43 crc kubenswrapper[4693]: I1212 17:14:43.268589 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" podUID="8c0d2adb-6fec-4574-8733-b6e817a943e5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:43 crc kubenswrapper[4693]: I1212 17:14:43.268644 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh" podUID="4c46ca75-8071-4f2a-bda0-44bf851365cb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:43 crc kubenswrapper[4693]: I1212 17:14:43.683828 4693 patch_prober.go:28] interesting pod/route-controller-manager-66fdbf566b-4w29d container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:43 crc kubenswrapper[4693]: I1212 17:14:43.683837 4693 patch_prober.go:28] interesting pod/route-controller-manager-66fdbf566b-4w29d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:43 crc kubenswrapper[4693]: I1212 17:14:43.684186 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" podUID="41b94683-51bf-4720-9160-36bd373d88ba" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:43 crc kubenswrapper[4693]: I1212 17:14:43.684310 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" podUID="41b94683-51bf-4720-9160-36bd373d88ba" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:43 crc kubenswrapper[4693]: I1212 17:14:43.787734 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb" podUID="463ba770-ab51-4445-8f63-bd5615ddb865" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:43 crc kubenswrapper[4693]: I1212 17:14:43.787819 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb" podUID="463ba770-ab51-4445-8f63-bd5615ddb865" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 
17:14:43 crc kubenswrapper[4693]: E1212 17:14:43.844882 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b105c9ded490e7c88edc8ba7c3c4f5377050968b39ed6a553bf1ba3bce9814b9" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 12 17:14:43 crc kubenswrapper[4693]: E1212 17:14:43.846555 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b105c9ded490e7c88edc8ba7c3c4f5377050968b39ed6a553bf1ba3bce9814b9" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 12 17:14:43 crc kubenswrapper[4693]: E1212 17:14:43.848142 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b105c9ded490e7c88edc8ba7c3c4f5377050968b39ed6a553bf1ba3bce9814b9" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 12 17:14:43 crc kubenswrapper[4693]: E1212 17:14:43.848199 4693 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="215ca5de-ce9f-4370-8aff-715dd1e384a3" containerName="ovn-northd" Dec 12 17:14:43 crc kubenswrapper[4693]: I1212 17:14:43.870613 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" podUID="5616685d-71d7-49b9-8c1b-6eccc11a74a1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:43 crc kubenswrapper[4693]: I1212 17:14:43.870766 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" podUID="5616685d-71d7-49b9-8c1b-6eccc11a74a1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:43 crc kubenswrapper[4693]: I1212 17:14:43.955574 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" podUID="26537316-7b55-48dc-b952-bc2220120194" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:43 crc kubenswrapper[4693]: I1212 17:14:43.955736 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" podUID="26537316-7b55-48dc-b952-bc2220120194" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:44 crc kubenswrapper[4693]: I1212 17:14:44.324084 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-q5jh6" podUID="2dabbc2a-64f3-4f25-8b65-17ed75c51801" containerName="registry-server" probeResult="failure" output=< Dec 12 17:14:44 crc kubenswrapper[4693]: timeout: failed to 
connect service ":50051" within 1s Dec 12 17:14:44 crc kubenswrapper[4693]: > Dec 12 17:14:44 crc kubenswrapper[4693]: I1212 17:14:44.324083 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-sbjt9" podUID="cabce219-6d9f-4aac-9402-ecf80e930f68" containerName="registry-server" probeResult="failure" output=< Dec 12 17:14:44 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:14:44 crc kubenswrapper[4693]: > Dec 12 17:14:44 crc kubenswrapper[4693]: I1212 17:14:44.324879 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-sbjt9" podUID="cabce219-6d9f-4aac-9402-ecf80e930f68" containerName="registry-server" probeResult="failure" output=< Dec 12 17:14:44 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:14:44 crc kubenswrapper[4693]: > Dec 12 17:14:44 crc kubenswrapper[4693]: I1212 17:14:44.523005 4693 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-zwt44 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:44 crc kubenswrapper[4693]: I1212 17:14:44.523112 4693 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-zwt44 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:44 crc kubenswrapper[4693]: I1212 17:14:44.523162 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" podUID="cae4711d-6ae1-402f-9fc8-751998ed785d" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:44 crc kubenswrapper[4693]: I1212 17:14:44.523178 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" podUID="cae4711d-6ae1-402f-9fc8-751998ed785d" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:44 crc kubenswrapper[4693]: I1212 17:14:44.607983 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-bhqb5" podUID="60ada46e-eb41-4339-a653-610721982c81" containerName="registry-server" probeResult="failure" output=< Dec 12 17:14:44 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:14:44 crc kubenswrapper[4693]: > Dec 12 17:14:44 crc kubenswrapper[4693]: I1212 17:14:44.706349 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-wwxcz" podUID="ef8bea0e-6f25-4c4d-a294-f246fbff9926" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:14:44 crc kubenswrapper[4693]: I1212 
17:14:44.809669 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Dec 12 17:14:44 crc kubenswrapper[4693]: I1212 17:14:44.815221 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Dec 12 17:14:44 crc kubenswrapper[4693]: I1212 17:14:44.817574 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"b715f8597df5123d0b0521b7ea55d22bfbb06bec445d205213530d7ca9177bce"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Dec 12 17:14:44 crc kubenswrapper[4693]: I1212 17:14:44.817909 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40" containerName="ceilometer-central-agent" containerID="cri-o://b715f8597df5123d0b0521b7ea55d22bfbb06bec445d205213530d7ca9177bce" gracePeriod=30 Dec 12 17:14:44 crc kubenswrapper[4693]: I1212 17:14:44.822263 4693 trace.go:236] Trace[1126091520]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (12-Dec-2025 17:14:42.426) (total time: 2394ms): Dec 12 17:14:44 crc kubenswrapper[4693]: Trace[1126091520]: [2.394230141s] [2.394230141s] END Dec 12 17:14:44 crc kubenswrapper[4693]: I1212 17:14:44.921196 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-bhqb5" podUID="60ada46e-eb41-4339-a653-610721982c81" containerName="registry-server" probeResult="failure" output=< Dec 12 17:14:44 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:14:44 crc kubenswrapper[4693]: > Dec 12 17:14:44 crc kubenswrapper[4693]: I1212 17:14:44.921239 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-q5jh6" podUID="2dabbc2a-64f3-4f25-8b65-17ed75c51801" containerName="registry-server" probeResult="failure" output=< Dec 12 17:14:44 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:14:44 crc kubenswrapper[4693]: > Dec 12 17:14:45 crc kubenswrapper[4693]: I1212 17:14:45.067596 4693 patch_prober.go:28] interesting pod/thanos-querier-7f697d8f45-x28ts container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:45 crc kubenswrapper[4693]: I1212 17:14:45.067696 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" podUID="3808e52c-3efa-4017-b799-bc195fd1d611" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:45 crc kubenswrapper[4693]: I1212 17:14:45.178025 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-x27t6" podUID="bb6e1c71-15d0-4078-837b-0d0d7c9e981f" containerName="registry-server" probeResult="failure" output=< Dec 12 17:14:45 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 
17:14:45 crc kubenswrapper[4693]: > Dec 12 17:14:45 crc kubenswrapper[4693]: I1212 17:14:45.180971 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-x27t6" podUID="bb6e1c71-15d0-4078-837b-0d0d7c9e981f" containerName="registry-server" probeResult="failure" output=< Dec 12 17:14:45 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:14:45 crc kubenswrapper[4693]: > Dec 12 17:14:45 crc kubenswrapper[4693]: I1212 17:14:45.389575 4693 patch_prober.go:28] interesting pod/console-65dd6d4bcb-h8fs2 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:45 crc kubenswrapper[4693]: I1212 17:14:45.389660 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-65dd6d4bcb-h8fs2" podUID="ac10c353-ed34-4f82-ad22-dc0065fbb96e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:45 crc kubenswrapper[4693]: I1212 17:14:45.516499 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" podUID="51d89b29-7872-4e9d-9fdd-b1fdd7de6de3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:45 crc kubenswrapper[4693]: I1212 17:14:45.808990 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" podUID="ffd9365b-fb4c-4a2b-a168-fcf9cce89228" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:46 crc kubenswrapper[4693]: I1212 17:14:46.218086 4693 patch_prober.go:28] interesting pod/logging-loki-distributor-76cc67bf56-fsnzk container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:46 crc kubenswrapper[4693]: I1212 17:14:46.218787 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" podUID="fee116a3-18ff-4755-b34f-82baa25eeefd" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:46 crc kubenswrapper[4693]: I1212 17:14:46.352424 4693 patch_prober.go:28] interesting pod/logging-loki-querier-5895d59bb8-gcwwx container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:46 crc kubenswrapper[4693]: I1212 17:14:46.352495 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" 
podUID="b89219b7-2b92-44d8-897c-beb5ef9d6861" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:46 crc kubenswrapper[4693]: I1212 17:14:46.457433 4693 patch_prober.go:28] interesting pod/logging-loki-query-frontend-84558f7c9f-ht5fw container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:46 crc kubenswrapper[4693]: I1212 17:14:46.457529 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" podUID="2691b993-4939-4cf2-84ab-1d34ea3dded9" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:46 crc kubenswrapper[4693]: I1212 17:14:46.789703 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="cebb8554-fc88-4ab5-b2d9-61495b3648f6" containerName="prometheus" probeResult="failure" output="command timed out" Dec 12 17:14:46 crc kubenswrapper[4693]: I1212 17:14:46.789944 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="cebb8554-fc88-4ab5-b2d9-61495b3648f6" containerName="prometheus" probeResult="failure" output="command timed out" Dec 12 17:14:46 crc kubenswrapper[4693]: I1212 17:14:46.864422 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_215ca5de-ce9f-4370-8aff-715dd1e384a3/ovn-northd/0.log" Dec 12 17:14:46 crc kubenswrapper[4693]: I1212 17:14:46.864789 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"215ca5de-ce9f-4370-8aff-715dd1e384a3","Type":"ContainerDied","Data":"b105c9ded490e7c88edc8ba7c3c4f5377050968b39ed6a553bf1ba3bce9814b9"} Dec 12 17:14:46 crc kubenswrapper[4693]: I1212 17:14:46.864863 4693 generic.go:334] "Generic (PLEG): container finished" podID="215ca5de-ce9f-4370-8aff-715dd1e384a3" containerID="b105c9ded490e7c88edc8ba7c3c4f5377050968b39ed6a553bf1ba3bce9814b9" exitCode=139 Dec 12 17:14:46 crc kubenswrapper[4693]: I1212 17:14:46.916690 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-jstfj container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:46 crc kubenswrapper[4693]: I1212 17:14:46.916792 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" podUID="752b64e1-40d2-47cd-a555-0e23495e2443" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:46 crc kubenswrapper[4693]: I1212 17:14:46.936474 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-6fjrq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 
17:14:46 crc kubenswrapper[4693]: I1212 17:14:46.936566 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" podUID="34840dce-2cd9-4cf3-81cc-be2fb6e08993" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:47 crc kubenswrapper[4693]: I1212 17:14:47.365684 4693 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:47 crc kubenswrapper[4693]: I1212 17:14:47.365762 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="c8c936c1-e205-423b-960f-af3894b107df" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:47 crc kubenswrapper[4693]: I1212 17:14:47.521856 4693 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.77:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:47 crc kubenswrapper[4693]: I1212 17:14:47.522379 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="54ae60ed-431d-4995-a2f7-564738343760" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.77:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:47 crc kubenswrapper[4693]: I1212 17:14:47.640957 4693 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:47 crc kubenswrapper[4693]: I1212 17:14:47.641377 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:47 crc kubenswrapper[4693]: I1212 17:14:47.916480 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" podUID="26cbeab5-89ba-425a-87c0-8797ceb6957a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:47 crc kubenswrapper[4693]: I1212 17:14:47.916565 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" podUID="26cbeab5-89ba-425a-87c0-8797ceb6957a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
Dec 12 17:14:47 crc kubenswrapper[4693]: I1212 17:14:47.923710 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="03515034-0f60-4e96-b2cc-9784f6e07887" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.161:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:47 crc kubenswrapper[4693]: I1212 17:14:47.923777 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="03515034-0f60-4e96-b2cc-9784f6e07887" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.161:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.034465 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.034491 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.034794 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.034736 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.220837 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-8cb8d8774-9lkdf" podUID="1da03bb8-e1d0-4e14-9a78-c5bcca1a191f" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.99:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.262436 4693 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-wz942 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.262524 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" podUID="87e8f397-20cd-469f-924d-204ce1a8db47" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.391634 4693 patch_prober.go:28] interesting pod/downloads-7954f5f757-bz9v2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.392002 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bz9v2" podUID="586d0874-ebe1-41db-b596-1dfed12b2b94" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.392338 4693 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hhs2z container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.392390 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" podUID="38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.392440 4693 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hhs2z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.392452 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" podUID="38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.392478 4693 patch_prober.go:28] interesting pod/console-operator-58897d9998-wcl2w container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.392489 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" podUID="f51bf74b-1d86-4a22-a355-f2c64a6516e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.392518 4693 patch_prober.go:28] interesting pod/console-operator-58897d9998-wcl2w container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get 
\"https://10.217.0.34:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.392530 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" podUID="f51bf74b-1d86-4a22-a355-f2c64a6516e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.392555 4693 patch_prober.go:28] interesting pod/metrics-server-66fcfb545d-whswt container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.392546 4693 patch_prober.go:28] interesting pod/downloads-7954f5f757-bz9v2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.392581 4693 patch_prober.go:28] interesting pod/metrics-server-66fcfb545d-whswt container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.392593 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" podUID="cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.392568 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" podUID="cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.392634 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.392619 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bz9v2" podUID="586d0874-ebe1-41db-b596-1dfed12b2b94" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.412285 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"5915a36368dfe5a7b12fb280ceb2ddc12c9a9dadd30ab5b7207f97204a71c11e"} pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" containerMessage="Container metrics-server failed liveness probe, will be restarted" Dec 12 
17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.416352 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" podUID="cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0" containerName="metrics-server" containerID="cri-o://5915a36368dfe5a7b12fb280ceb2ddc12c9a9dadd30ab5b7207f97204a71c11e" gracePeriod=170 Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.439558 4693 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-pqr7h container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.439665 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h" podUID="80dd1d93-b2bd-4fad-b199-aa072c2c8216" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.482389 4693 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-pqr7h container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.482456 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h" podUID="80dd1d93-b2bd-4fad-b199-aa072c2c8216" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.658434 4693 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r6qvl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.659610 4693 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r6qvl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.659652 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" podUID="d3b86c37-5764-4b23-b927-ad4a77885456" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.658508 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" podUID="d3b86c37-5764-4b23-b927-ad4a77885456" 
containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.704232 4693 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d86dw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.704468 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" podUID="61620225-2125-49da-94f6-f6ef9dd7e6ce" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.742480 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" podUID="57409d4d-edf7-400c-9fcf-d6116ac22968" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.742502 4693 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d86dw container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.742577 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" podUID="61620225-2125-49da-94f6-f6ef9dd7e6ce" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.742598 4693 patch_prober.go:28] interesting pod/monitoring-plugin-78f748b45-xcpg8 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.742658 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" podUID="f267d07b-2357-45ee-999d-94fad4f7bbce" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:48 crc kubenswrapper[4693]: I1212 17:14:48.742733 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" Dec 12 17:14:48 crc kubenswrapper[4693]: E1212 17:14:48.841547 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b105c9ded490e7c88edc8ba7c3c4f5377050968b39ed6a553bf1ba3bce9814b9 is running failed: 
container process not found" containerID="b105c9ded490e7c88edc8ba7c3c4f5377050968b39ed6a553bf1ba3bce9814b9" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 12 17:14:48 crc kubenswrapper[4693]: E1212 17:14:48.841971 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b105c9ded490e7c88edc8ba7c3c4f5377050968b39ed6a553bf1ba3bce9814b9 is running failed: container process not found" containerID="b105c9ded490e7c88edc8ba7c3c4f5377050968b39ed6a553bf1ba3bce9814b9" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 12 17:14:48 crc kubenswrapper[4693]: E1212 17:14:48.842157 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b105c9ded490e7c88edc8ba7c3c4f5377050968b39ed6a553bf1ba3bce9814b9 is running failed: container process not found" containerID="b105c9ded490e7c88edc8ba7c3c4f5377050968b39ed6a553bf1ba3bce9814b9" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 12 17:14:48 crc kubenswrapper[4693]: E1212 17:14:48.842187 4693 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b105c9ded490e7c88edc8ba7c3c4f5377050968b39ed6a553bf1ba3bce9814b9 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="215ca5de-ce9f-4370-8aff-715dd1e384a3" containerName="ovn-northd" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.217585 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-7qqwq" podUID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.301533 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" podUID="5a6d7731-5c1e-4b9b-b847-6deabf3f6af9" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.301649 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-7qqwq" podUID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.304420 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-7qqwq" podUID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.304513 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-7qqwq" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.304542 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" podUID="5a6d7731-5c1e-4b9b-b847-6deabf3f6af9" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get 
\"http://10.217.0.94:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.332131 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"20587660f0b1976afd8b689326d141a7a9ae1e6c7871d869a9aef05662aa3318"} pod="metallb-system/frr-k8s-7qqwq" containerMessage="Container frr failed liveness probe, will be restarted" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.332297 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-7qqwq" podUID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerName="frr" containerID="cri-o://20587660f0b1976afd8b689326d141a7a9ae1e6c7871d869a9aef05662aa3318" gracePeriod=2 Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.386573 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-5bddd4b946-jdmtk" podUID="a1424cfc-ad10-45e2-b69f-f313e29e5b58" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.95:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.387175 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-5bddd4b946-jdmtk" podUID="a1424cfc-ad10-45e2-b69f-f313e29e5b58" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.95:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.743171 4693 patch_prober.go:28] interesting pod/monitoring-plugin-78f748b45-xcpg8 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.743594 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" podUID="f267d07b-2357-45ee-999d-94fad4f7bbce" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.806867 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="73559c8b-d017-4a5d-aced-3da25d264b0a" containerName="galera" probeResult="failure" output="command timed out" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.806956 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.806891 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="73559c8b-d017-4a5d-aced-3da25d264b0a" containerName="galera" probeResult="failure" output="command timed out" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.807393 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.808223 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" 
containerStatusID={"Type":"cri-o","ID":"ecf679915acb4c8bd632299eb618fc9ab601608b029a46cdfa94cef79871b750"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.933840 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_215ca5de-ce9f-4370-8aff-715dd1e384a3/ovn-northd/0.log" Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.933906 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"215ca5de-ce9f-4370-8aff-715dd1e384a3","Type":"ContainerStarted","Data":"1c357926bd12c9dffb5a2806d85c419905fb924e4120c5c467b3c1cc386f752b"} Dec 12 17:14:49 crc kubenswrapper[4693]: I1212 17:14:49.934179 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 12 17:14:50 crc kubenswrapper[4693]: I1212 17:14:50.022462 4693 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-t9zpr container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.13:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:50 crc kubenswrapper[4693]: I1212 17:14:50.022480 4693 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-t9zpr container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.13:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:50 crc kubenswrapper[4693]: I1212 17:14:50.022519 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" podUID="51db3b1a-8b64-47d6-b09c-a8356e855606" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.13:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:50 crc kubenswrapper[4693]: I1212 17:14:50.022548 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" podUID="51db3b1a-8b64-47d6-b09c-a8356e855606" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.13:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:50 crc kubenswrapper[4693]: I1212 17:14:50.068200 4693 patch_prober.go:28] interesting pod/thanos-querier-7f697d8f45-x28ts container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:50 crc kubenswrapper[4693]: I1212 17:14:50.068261 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" podUID="3808e52c-3efa-4017-b799-bc195fd1d611" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:50 crc kubenswrapper[4693]: I1212 17:14:50.143459 4693 patch_prober.go:28] interesting pod/perses-operator-5446b9c989-clst5 container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Dec 12 17:14:50 crc kubenswrapper[4693]: I1212 17:14:50.143805 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5446b9c989-clst5" podUID="2dd59d35-d975-49a2-8a23-db068e921965" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:50 crc kubenswrapper[4693]: I1212 17:14:50.245437 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-ndkbb" podUID="a0fbfcb7-b516-452f-be80-ddd275ed0987" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:50 crc kubenswrapper[4693]: I1212 17:14:50.245438 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-ndkbb" podUID="a0fbfcb7-b516-452f-be80-ddd275ed0987" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:50 crc kubenswrapper[4693]: I1212 17:14:50.343454 4693 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mglqp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:50 crc kubenswrapper[4693]: I1212 17:14:50.343538 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" podUID="fef1de87-a0ab-4a6e-9b37-d446cf2ec47e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:50 crc kubenswrapper[4693]: I1212 17:14:50.343612 4693 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mglqp container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:50 crc kubenswrapper[4693]: I1212 17:14:50.343700 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" podUID="fef1de87-a0ab-4a6e-9b37-d446cf2ec47e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:50 crc kubenswrapper[4693]: I1212 17:14:50.950756 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qqwq" event={"ID":"f24e6c5f-a222-44d6-8c2a-75b0d066e218","Type":"ContainerDied","Data":"20587660f0b1976afd8b689326d141a7a9ae1e6c7871d869a9aef05662aa3318"} Dec 12 17:14:50 crc kubenswrapper[4693]: I1212 17:14:50.951120 4693 generic.go:334] "Generic (PLEG): container finished" podID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerID="20587660f0b1976afd8b689326d141a7a9ae1e6c7871d869a9aef05662aa3318" exitCode=143 Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.404466 4693 patch_prober.go:28] interesting pod/nmstate-webhook-f8fb84555-4jrn4 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get 
\"https://10.217.0.87:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.404548 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4" podUID="18a4d414-e872-4bb3-ae29-166fcc455a9a" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.87:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.814802 4693 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.815128 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.815265 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="ec693a73-a415-42a1-98f4-86438aa58d56" containerName="galera" probeResult="failure" output="command timed out" Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.815310 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="cebb8554-fc88-4ab5-b2d9-61495b3648f6" containerName="prometheus" probeResult="failure" output="command timed out" Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.815467 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="ec693a73-a415-42a1-98f4-86438aa58d56" containerName="galera" probeResult="failure" output="command timed out" Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.815717 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.815769 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.816359 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="cebb8554-fc88-4ab5-b2d9-61495b3648f6" containerName="prometheus" probeResult="failure" output="command timed out" Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.819676 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-z5g4l" podUID="df1e4454-429c-4d2a-b372-b33ee0e88e6b" containerName="nmstate-handler" probeResult="failure" output="command timed out" Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.819582 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"ee2909168c683d47031dfb2427036a864486334bf9662c1d0420c4430180999b"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness 
probe, will be restarted" Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.853258 4693 patch_prober.go:28] interesting pod/apiserver-76f77b778f-d28jp container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.853346 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-d28jp" podUID="31b7f38e-5f91-43bf-bba4-bc8592747704" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.916722 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-jstfj container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.916831 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" podUID="752b64e1-40d2-47cd-a555-0e23495e2443" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.936786 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-6fjrq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:51 crc kubenswrapper[4693]: I1212 17:14:51.936943 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" podUID="34840dce-2cd9-4cf3-81cc-be2fb6e08993" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.655576 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" podUID="ac211615-b518-4011-be82-483cbb246d4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.655615 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="cd076b50-0211-4876-b7df-b7140ebac121" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.2:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.655587 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="cd076b50-0211-4876-b7df-b7140ebac121" containerName="kube-state-metrics" 
probeResult="failure" output="Get \"https://10.217.1.2:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.788584 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="ec693a73-a415-42a1-98f4-86438aa58d56" containerName="galera" probeResult="failure" output="command timed out" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.791136 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-f2llf" podUID="19868aeb-2fda-43a6-8801-7d72c8465394" containerName="registry-server" probeResult="failure" output="command timed out" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.794993 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-f2llf" podUID="19868aeb-2fda-43a6-8801-7d72c8465394" containerName="registry-server" probeResult="failure" output="command timed out" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.804587 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw" podUID="f56863f1-3f85-4c6f-a2a6-81f0ee3b6317" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.804706 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-5w2f8" podUID="0f300296-9b08-4fcc-9933-a752304b3188" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.846440 4693 patch_prober.go:28] interesting pod/oauth-openshift-db548d47c-z22tr container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.846456 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h" podUID="267498f5-fa7b-44ec-bd94-361a261e8844" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.846488 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" podUID="70395fde-23f6-41b0-a04e-c4568b405e9d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.846529 4693 patch_prober.go:28] interesting pod/oauth-openshift-db548d47c-z22tr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 
17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.846617 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" podUID="70395fde-23f6-41b0-a04e-c4568b405e9d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.887455 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" podUID="fe8f2a92-e87a-40d4-b96b-0e0af6443656" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.928441 4693 patch_prober.go:28] interesting pod/controller-manager-6964955f74-9kcjr container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.928482 4693 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5c67884d5c-jpl4r container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.51:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.928512 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" podUID="4741216c-0a8d-4079-b459-cb459dc4f5b3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.928541 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" podUID="ac702b24-9bff-4198-a7f7-e368773fb8de" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.51:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.928571 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.928572 4693 patch_prober.go:28] interesting pod/controller-manager-6964955f74-9kcjr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.928604 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" podUID="4741216c-0a8d-4079-b459-cb459dc4f5b3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.928784 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="03515034-0f60-4e96-b2cc-9784f6e07887" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.161:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.928869 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="03515034-0f60-4e96-b2cc-9784f6e07887" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.161:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.930246 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"596d34b0d0c5f7955cda287604606de90aad230ea92bd147f6351762f75fc764"} pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" containerMessage="Container controller-manager failed liveness probe, will be restarted" Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.930324 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" podUID="4741216c-0a8d-4079-b459-cb459dc4f5b3" containerName="controller-manager" containerID="cri-o://596d34b0d0c5f7955cda287604606de90aad230ea92bd147f6351762f75fc764" gracePeriod=30 Dec 12 17:14:52 crc kubenswrapper[4693]: I1212 17:14:52.978582 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qqwq" event={"ID":"f24e6c5f-a222-44d6-8c2a-75b0d066e218","Type":"ContainerStarted","Data":"2ad91039d2c7d3bbeb6ae1f050e8a74ba26ef32871d1657542b9ac665f0440b3"} Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.093559 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-7qqwq" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.227449 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" podUID="8c0d2adb-6fec-4574-8733-b6e817a943e5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.227531 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh" podUID="4c46ca75-8071-4f2a-bda0-44bf851365cb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.319613 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg" podUID="f73b5773-7bac-41ae-af91-0e504b5a234f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.361087 4693 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-cb6hq" podUID="b3dfd27f-9569-444d-a917-04c7f4c67ec9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.401527 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cg948" podUID="f8bfea4b-063c-461e-9116-63d76fd06130" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.458485 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-w5zgk" podUID="fc9969a7-d068-499a-90a4-571822a60c5b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.504478 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" podUID="a41df83d-6bb2-4c49-a431-f5851036a44d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.578425 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" podUID="96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.683643 4693 patch_prober.go:28] interesting pod/route-controller-manager-66fdbf566b-4w29d container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.683696 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" podUID="41b94683-51bf-4720-9160-36bd373d88ba" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.683740 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.683833 4693 patch_prober.go:28] interesting pod/route-controller-manager-66fdbf566b-4w29d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.683871 4693 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" podUID="41b94683-51bf-4720-9160-36bd373d88ba" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.684656 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"05594e0e6948065007460b75a1cefd9b075372db15fd8c62321575e79d1e6ee9"} pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.684697 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" podUID="41b94683-51bf-4720-9160-36bd373d88ba" containerName="route-controller-manager" containerID="cri-o://05594e0e6948065007460b75a1cefd9b075372db15fd8c62321575e79d1e6ee9" gracePeriod=30 Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.828464 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h" podUID="847f97b7-84be-4d2a-a699-30ca49fd1023" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.828489 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" podUID="5616685d-71d7-49b9-8c1b-6eccc11a74a1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.828583 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-78f8948974-4dnpb" podUID="463ba770-ab51-4445-8f63-bd5615ddb865" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.828610 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.881730 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" podUID="26537316-7b55-48dc-b952-bc2220120194" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.881841 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" Dec 12 17:14:53 crc kubenswrapper[4693]: I1212 17:14:53.951476 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-clqpc" podUID="fc2504e1-7808-49ef-9df0-2fda81f786f6" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:54 crc kubenswrapper[4693]: I1212 17:14:54.134647 4693 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-7qqwq" podUID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:54 crc kubenswrapper[4693]: I1212 17:14:54.522928 4693 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-zwt44 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:54 crc kubenswrapper[4693]: I1212 17:14:54.523046 4693 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-zwt44 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:54 crc kubenswrapper[4693]: I1212 17:14:54.523105 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" podUID="cae4711d-6ae1-402f-9fc8-751998ed785d" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:54 crc kubenswrapper[4693]: I1212 17:14:54.523092 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" podUID="cae4711d-6ae1-402f-9fc8-751998ed785d" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:54 crc kubenswrapper[4693]: I1212 17:14:54.523182 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" Dec 12 17:14:54 crc kubenswrapper[4693]: I1212 17:14:54.523234 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" Dec 12 17:14:54 crc kubenswrapper[4693]: I1212 17:14:54.525322 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"11c8d1c7d9c661001bf7db151b3c68c56edce7a8688ad7a11c612875c0194270"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Dec 12 17:14:54 crc kubenswrapper[4693]: I1212 17:14:54.525441 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" podUID="cae4711d-6ae1-402f-9fc8-751998ed785d" 
containerName="prometheus-operator-admission-webhook" containerID="cri-o://11c8d1c7d9c661001bf7db151b3c68c56edce7a8688ad7a11c612875c0194270" gracePeriod=30 Dec 12 17:14:54 crc kubenswrapper[4693]: I1212 17:14:54.802856 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6676c589bf-7kphf" Dec 12 17:14:54 crc kubenswrapper[4693]: I1212 17:14:54.870487 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" podUID="5616685d-71d7-49b9-8c1b-6eccc11a74a1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:55 crc kubenswrapper[4693]: I1212 17:14:55.067837 4693 patch_prober.go:28] interesting pod/thanos-querier-7f697d8f45-x28ts container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:55 crc kubenswrapper[4693]: I1212 17:14:55.067908 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" podUID="3808e52c-3efa-4017-b799-bc195fd1d611" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:55 crc kubenswrapper[4693]: I1212 17:14:55.358138 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:14:55 crc kubenswrapper[4693]: E1212 17:14:55.358481 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:14:55 crc kubenswrapper[4693]: I1212 17:14:55.387963 4693 patch_prober.go:28] interesting pod/console-65dd6d4bcb-h8fs2 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:55 crc kubenswrapper[4693]: I1212 17:14:55.388371 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-65dd6d4bcb-h8fs2" podUID="ac10c353-ed34-4f82-ad22-dc0065fbb96e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:55 crc kubenswrapper[4693]: I1212 17:14:55.558646 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" podUID="51d89b29-7872-4e9d-9fdd-b1fdd7de6de3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:55 crc 
kubenswrapper[4693]: I1212 17:14:55.558710 4693 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-zwt44 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:55 crc kubenswrapper[4693]: I1212 17:14:55.558789 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" podUID="cae4711d-6ae1-402f-9fc8-751998ed785d" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:55 crc kubenswrapper[4693]: I1212 17:14:55.558994 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" podUID="51d89b29-7872-4e9d-9fdd-b1fdd7de6de3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:55 crc kubenswrapper[4693]: I1212 17:14:55.559153 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 17:14:55 crc kubenswrapper[4693]: I1212 17:14:55.772476 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-5655c58dd6-dbkxt" podUID="09a2f99b-1398-4e56-ac77-ae6e4d9aaac8" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:55 crc kubenswrapper[4693]: I1212 17:14:55.772494 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-5655c58dd6-dbkxt" podUID="09a2f99b-1398-4e56-ac77-ae6e4d9aaac8" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:55 crc kubenswrapper[4693]: I1212 17:14:55.856536 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" podUID="ffd9365b-fb4c-4a2b-a168-fcf9cce89228" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:55 crc kubenswrapper[4693]: I1212 17:14:55.856613 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" podUID="ffd9365b-fb4c-4a2b-a168-fcf9cce89228" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:55 crc kubenswrapper[4693]: I1212 17:14:55.856669 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.214057 4693 patch_prober.go:28] interesting pod/logging-loki-distributor-76cc67bf56-fsnzk 
container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.214293 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" podUID="fee116a3-18ff-4755-b34f-82baa25eeefd" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.214363 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.352138 4693 patch_prober.go:28] interesting pod/logging-loki-querier-5895d59bb8-gcwwx container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.352218 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" podUID="b89219b7-2b92-44d8-897c-beb5ef9d6861" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.352323 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.457237 4693 patch_prober.go:28] interesting pod/logging-loki-query-frontend-84558f7c9f-ht5fw container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.457321 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" podUID="2691b993-4939-4cf2-84ab-1d34ea3dded9" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.457414 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.601467 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" podUID="51d89b29-7872-4e9d-9fdd-b1fdd7de6de3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.804721 4693 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-marketplace/certified-operators-q5jh6" podUID="2dabbc2a-64f3-4f25-8b65-17ed75c51801" containerName="registry-server" probeResult="failure" output="command timed out" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.804720 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="cebb8554-fc88-4ab5-b2d9-61495b3648f6" containerName="prometheus" probeResult="failure" output="command timed out" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.804796 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-z5g4l" podUID="df1e4454-429c-4d2a-b372-b33ee0e88e6b" containerName="nmstate-handler" probeResult="failure" output="command timed out" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.804819 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-sbjt9" podUID="cabce219-6d9f-4aac-9402-ecf80e930f68" containerName="registry-server" probeResult="failure" output="command timed out" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.804836 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-sbjt9" podUID="cabce219-6d9f-4aac-9402-ecf80e930f68" containerName="registry-server" probeResult="failure" output="command timed out" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.804860 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-q5jh6" podUID="2dabbc2a-64f3-4f25-8b65-17ed75c51801" containerName="registry-server" probeResult="failure" output="command timed out" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.804880 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="cebb8554-fc88-4ab5-b2d9-61495b3648f6" containerName="prometheus" probeResult="failure" output="command timed out" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.804941 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.898524 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" podUID="ffd9365b-fb4c-4a2b-a168-fcf9cce89228" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.916587 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-jstfj container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.916644 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" podUID="752b64e1-40d2-47cd-a555-0e23495e2443" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.916664 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-jstfj container/gateway 
namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.916737 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" podUID="752b64e1-40d2-47cd-a555-0e23495e2443" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.936464 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-6fjrq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.936461 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-6fjrq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.936526 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" podUID="34840dce-2cd9-4cf3-81cc-be2fb6e08993" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:56 crc kubenswrapper[4693]: I1212 17:14:56.936577 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" podUID="34840dce-2cd9-4cf3-81cc-be2fb6e08993" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.100086 4693 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-hw9b4 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.100157 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" podUID="94c146b4-f621-42ff-b0db-5e471b8938b6" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.100167 4693 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-hw9b4 container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.100238 4693 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw9b4" podUID="94c146b4-f621-42ff-b0db-5e471b8938b6" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.23:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.131615 4693 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6x8t container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.131673 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" podUID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.131719 4693 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6x8t container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.131792 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" podUID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.214690 4693 patch_prober.go:28] interesting pod/logging-loki-distributor-76cc67bf56-fsnzk container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.214748 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" podUID="fee116a3-18ff-4755-b34f-82baa25eeefd" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.378576 4693 patch_prober.go:28] interesting pod/logging-loki-querier-5895d59bb8-gcwwx container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.378643 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" podUID="b89219b7-2b92-44d8-897c-beb5ef9d6861" containerName="loki-querier" probeResult="failure" output="Get 
\"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.378669 4693 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.378703 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="c8c936c1-e205-423b-960f-af3894b107df" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.378713 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" podUID="955c500c-bfaa-463d-b207-fcf0bd9bd9f2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.459016 4693 patch_prober.go:28] interesting pod/logging-loki-query-frontend-84558f7c9f-ht5fw container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.459304 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" podUID="2691b993-4939-4cf2-84ab-1d34ea3dded9" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.522712 4693 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.77:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.523253 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="54ae60ed-431d-4995-a2f7-564738343760" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.77:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.641593 4693 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.641677 4693 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.641727 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.643383 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-scheduler" containerStatusID={"Type":"cri-o","ID":"9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1"} pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" containerMessage="Container kube-scheduler failed liveness probe, will be restarted" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.643479 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" containerID="cri-o://9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1" gracePeriod=30 Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.761586 4693 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.761679 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="da093088-1eba-4749-954b-c87347466fa9" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.949509 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="03515034-0f60-4e96-b2cc-9784f6e07887" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.161:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.949651 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.949991 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" podUID="26cbeab5-89ba-425a-87c0-8797ceb6957a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.950169 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.951530 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" 
podUID="26cbeab5-89ba-425a-87c0-8797ceb6957a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.952713 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 17:14:57 crc kubenswrapper[4693]: I1212 17:14:57.951437 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="03515034-0f60-4e96-b2cc-9784f6e07887" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.161:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.033356 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"970ce2fc0dbc6fcc4e0fa4562e5b5d2591a444a6407defefa1819ef84f3286b7"} pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" containerMessage="Container webhook-server failed liveness probe, will be restarted" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.033410 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" podUID="26cbeab5-89ba-425a-87c0-8797ceb6957a" containerName="webhook-server" containerID="cri-o://970ce2fc0dbc6fcc4e0fa4562e5b5d2591a444a6407defefa1819ef84f3286b7" gracePeriod=2 Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.033499 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.033520 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.033566 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.033711 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.033783 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.033835 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 
17:14:58.034151 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"979d3ca0028d2fa82eb5aa011aa5c9b4c3540b7482648ad9eed0d95cac19d909"} pod="openshift-ingress/router-default-5444994796-hfmz9" containerMessage="Container router failed liveness probe, will be restarted" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.034172 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" containerID="cri-o://979d3ca0028d2fa82eb5aa011aa5c9b4c3540b7482648ad9eed0d95cac19d909" gracePeriod=10 Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.262553 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-8cb8d8774-9lkdf" podUID="1da03bb8-e1d0-4e14-9a78-c5bcca1a191f" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.99:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.262647 4693 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-wz942 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.262670 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" podUID="87e8f397-20cd-469f-924d-204ce1a8db47" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.262697 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.262743 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-operator-8cb8d8774-9lkdf" podUID="1da03bb8-e1d0-4e14-9a78-c5bcca1a191f" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.99:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.263447 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"a5668880ee0e6829c325ff86e69fe77011d67cc1a0cdee6b6a6c956ccabaefdc"} pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.263486 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" podUID="87e8f397-20cd-469f-924d-204ce1a8db47" containerName="authentication-operator" containerID="cri-o://a5668880ee0e6829c325ff86e69fe77011d67cc1a0cdee6b6a6c956ccabaefdc" gracePeriod=30 Dec 12 17:14:58 crc kubenswrapper[4693]: 
I1212 17:14:58.389507 4693 patch_prober.go:28] interesting pod/downloads-7954f5f757-bz9v2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.389569 4693 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hhs2z container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.389970 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bz9v2" podUID="586d0874-ebe1-41db-b596-1dfed12b2b94" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.390022 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" podUID="38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.389678 4693 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hhs2z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.390083 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.389714 4693 patch_prober.go:28] interesting pod/downloads-7954f5f757-bz9v2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.390127 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" podUID="38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.390180 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bz9v2" podUID="586d0874-ebe1-41db-b596-1dfed12b2b94" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.389756 4693 patch_prober.go:28] interesting pod/console-operator-58897d9998-wcl2w container/console-operator namespace/openshift-console-operator: 
Liveness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.389792 4693 patch_prober.go:28] interesting pod/console-operator-58897d9998-wcl2w container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.390289 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" podUID="f51bf74b-1d86-4a22-a355-f2c64a6516e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.390300 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.390264 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" podUID="f51bf74b-1d86-4a22-a355-f2c64a6516e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.390536 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.390571 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.396077 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"5dc332e8efc65c41ab73c49cd3f3bb76270165246a873cf2098c2facbefb5af5"} pod="openshift-console-operator/console-operator-58897d9998-wcl2w" containerMessage="Container console-operator failed liveness probe, will be restarted" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.396125 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" podUID="f51bf74b-1d86-4a22-a355-f2c64a6516e5" containerName="console-operator" containerID="cri-o://5dc332e8efc65c41ab73c49cd3f3bb76270165246a873cf2098c2facbefb5af5" gracePeriod=30 Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.405144 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"704e5981a535815e3d0aa2b0ef343d5ed69f70b480077e26e33a1c4fa93c4793"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" containerMessage="Container olm-operator failed liveness probe, will be restarted" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.405201 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" 
podUID="38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95" containerName="olm-operator" containerID="cri-o://704e5981a535815e3d0aa2b0ef343d5ed69f70b480077e26e33a1c4fa93c4793" gracePeriod=30 Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.480491 4693 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-pqr7h container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.480566 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h" podUID="80dd1d93-b2bd-4fad-b199-aa072c2c8216" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.480808 4693 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-pqr7h container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.480873 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pqr7h" podUID="80dd1d93-b2bd-4fad-b199-aa072c2c8216" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.658706 4693 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r6qvl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.658807 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" podUID="d3b86c37-5764-4b23-b927-ad4a77885456" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.658745 4693 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r6qvl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.658897 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r6qvl" podUID="d3b86c37-5764-4b23-b927-ad4a77885456" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.743397 4693 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d86dw container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.743437 4693 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d86dw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.743401 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" podUID="57409d4d-edf7-400c-9fcf-d6116ac22968" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.743464 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" podUID="61620225-2125-49da-94f6-f6ef9dd7e6ce" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.743510 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.743540 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.743441 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" podUID="61620225-2125-49da-94f6-f6ef9dd7e6ce" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.743801 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.744331 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"82d8ce9a09e27e3be2af59e50054920230529cb0173716e36fd4edafc1822555"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" containerMessage="Container packageserver failed liveness probe, will be restarted" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.744362 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" podUID="61620225-2125-49da-94f6-f6ef9dd7e6ce" containerName="packageserver" 
containerID="cri-o://82d8ce9a09e27e3be2af59e50054920230529cb0173716e36fd4edafc1822555" gracePeriod=30 Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.784411 4693 patch_prober.go:28] interesting pod/monitoring-plugin-78f748b45-xcpg8 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.784430 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" podUID="57409d4d-edf7-400c-9fcf-d6116ac22968" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.784493 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" podUID="f267d07b-2357-45ee-999d-94fad4f7bbce" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.79:9443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.791896 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-engine-6f88647487-x6nx4" podUID="9358ae82-e228-4d1e-8d68-8ff49a9bbdc1" containerName="heat-engine" probeResult="failure" output="command timed out" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.792024 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-bhqb5" podUID="60ada46e-eb41-4339-a653-610721982c81" containerName="registry-server" probeResult="failure" output="command timed out" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.793126 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-engine-6f88647487-x6nx4" podUID="9358ae82-e228-4d1e-8d68-8ff49a9bbdc1" containerName="heat-engine" probeResult="failure" output="command timed out" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.793740 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-bhqb5" podUID="60ada46e-eb41-4339-a653-610721982c81" containerName="registry-server" probeResult="failure" output="command timed out" Dec 12 17:14:58 crc kubenswrapper[4693]: I1212 17:14:58.995416 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" podUID="26cbeab5-89ba-425a-87c0-8797ceb6957a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.047536 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" event={"ID":"8c0d2adb-6fec-4574-8733-b6e817a943e5","Type":"ContainerDied","Data":"81d658fedadc442c02a9a59bf8446a19442f00ede51a99032702807c85ff4d65"} Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.047867 4693 generic.go:334] "Generic (PLEG): container finished" podID="8c0d2adb-6fec-4574-8733-b6e817a943e5" containerID="81d658fedadc442c02a9a59bf8446a19442f00ede51a99032702807c85ff4d65" exitCode=1 Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.049339 4693 
scope.go:117] "RemoveContainer" containerID="81d658fedadc442c02a9a59bf8446a19442f00ede51a99032702807c85ff4d65" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.051484 4693 generic.go:334] "Generic (PLEG): container finished" podID="3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40" containerID="b715f8597df5123d0b0521b7ea55d22bfbb06bec445d205213530d7ca9177bce" exitCode=0 Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.051545 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40","Type":"ContainerDied","Data":"b715f8597df5123d0b0521b7ea55d22bfbb06bec445d205213530d7ca9177bce"} Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.077245 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.077348 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.218048 4693 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-7qqwq" podUID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.300021 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" podUID="5a6d7731-5c1e-4b9b-b847-6deabf3f6af9" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.300096 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.300266 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-7qqwq" podUID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.300454 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-7qqwq" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.300556 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-7qqwq" podUID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.300630 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-7qqwq" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.301129 4693 
prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" podUID="5a6d7731-5c1e-4b9b-b847-6deabf3f6af9" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.301177 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.305201 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"e1c262531dfecff5b7c6af1ae256cd13df8258d3b62f310fcb97d8e08ffe7370"} pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.305263 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" podUID="5a6d7731-5c1e-4b9b-b847-6deabf3f6af9" containerName="frr-k8s-webhook-server" containerID="cri-o://e1c262531dfecff5b7c6af1ae256cd13df8258d3b62f310fcb97d8e08ffe7370" gracePeriod=10 Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.310169 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"020a118cda47c4339ba0ef9dd6d3d3b139bcef20bca409e55f2a7cd0885112e1"} pod="metallb-system/frr-k8s-7qqwq" containerMessage="Container controller failed liveness probe, will be restarted" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.310713 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-7qqwq" podUID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerName="controller" containerID="cri-o://020a118cda47c4339ba0ef9dd6d3d3b139bcef20bca409e55f2a7cd0885112e1" gracePeriod=2 Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.387567 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-5bddd4b946-jdmtk" podUID="a1424cfc-ad10-45e2-b69f-f313e29e5b58" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.95:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.388127 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-5bddd4b946-jdmtk" podUID="a1424cfc-ad10-45e2-b69f-f313e29e5b58" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.95:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.390440 4693 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hhs2z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.390470 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" podUID="38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95" containerName="olm-operator" probeResult="failure" 
output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.392102 4693 patch_prober.go:28] interesting pod/console-operator-58897d9998-wcl2w container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.392139 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" podUID="f51bf74b-1d86-4a22-a355-f2c64a6516e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:59 crc kubenswrapper[4693]: E1212 17:14:59.442398 4693 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c2e2ba8_1cb5_42c1_979b_9c48aefb7f40.slice/crio-conmon-b715f8597df5123d0b0521b7ea55d22bfbb06bec445d205213530d7ca9177bce.scope\": RecentStats: unable to find data in memory cache]" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.744530 4693 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d86dw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.744925 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" podUID="61620225-2125-49da-94f6-f6ef9dd7e6ce" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.785462 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" podUID="57409d4d-edf7-400c-9fcf-d6116ac22968" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.794094 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-x27t6" podUID="bb6e1c71-15d0-4078-837b-0d0d7c9e981f" containerName="registry-server" probeResult="failure" output="command timed out" Dec 12 17:14:59 crc kubenswrapper[4693]: I1212 17:14:59.794127 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-x27t6" podUID="bb6e1c71-15d0-4078-837b-0d0d7c9e981f" containerName="registry-server" probeResult="failure" output="command timed out" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.022451 4693 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-t9zpr container/operator namespace/openshift-operators: Liveness probe status=failure output="Get 
\"http://10.217.0.13:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.022507 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" podUID="51db3b1a-8b64-47d6-b09c-a8356e855606" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.13:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.022451 4693 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-t9zpr container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.13:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.022553 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-t9zpr" podUID="51db3b1a-8b64-47d6-b09c-a8356e855606" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.13:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.049050 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" podUID="26cbeab5-89ba-425a-87c0-8797ceb6957a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.067904 4693 patch_prober.go:28] interesting pod/thanos-querier-7f697d8f45-x28ts container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.067996 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-7f697d8f45-x28ts" podUID="3808e52c-3efa-4017-b799-bc195fd1d611" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.187558 4693 patch_prober.go:28] interesting pod/perses-operator-5446b9c989-clst5 container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.187596 4693 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6x8t container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.187563 4693 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6x8t container/openshift-config-operator namespace/openshift-config-operator: 
Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.187679 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" podUID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.187701 4693 patch_prober.go:28] interesting pod/perses-operator-5446b9c989-clst5 container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.187623 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5446b9c989-clst5" podUID="2dd59d35-d975-49a2-8a23-db068e921965" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.187645 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" podUID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.187721 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5446b9c989-clst5" podUID="2dd59d35-d975-49a2-8a23-db068e921965" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.269449 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-ndkbb" podUID="a0fbfcb7-b516-452f-be80-ddd275ed0987" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.269523 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-ndkbb" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.269449 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-ndkbb" podUID="a0fbfcb7-b516-452f-be80-ddd275ed0987" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.269679 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-ndkbb" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.270737 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" 
containerStatusID={"Type":"cri-o","ID":"231ef294eed60ec252f71a59ce58051eef244007adb124dd46f77c91963b90f9"} pod="metallb-system/speaker-ndkbb" containerMessage="Container speaker failed liveness probe, will be restarted" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.270831 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-ndkbb" podUID="a0fbfcb7-b516-452f-be80-ddd275ed0987" containerName="speaker" containerID="cri-o://231ef294eed60ec252f71a59ce58051eef244007adb124dd46f77c91963b90f9" gracePeriod=2 Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.351492 4693 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mglqp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.351545 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" podUID="fef1de87-a0ab-4a6e-9b37-d446cf2ec47e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.392525 4693 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mglqp container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.392576 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-7qqwq" podUID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.392601 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-mglqp" podUID="fef1de87-a0ab-4a6e-9b37-d446cf2ec47e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.524587 4693 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-t2jb4 container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.64:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.524650 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" podUID="2132cae9-b4b7-48be-bb0a-482d215417af" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.64:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.524718 4693 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-t2jb4 container/registry namespace/openshift-image-registry: Readiness probe 
status=failure output="Get \"https://10.217.0.64:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.524730 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-t2jb4" podUID="2132cae9-b4b7-48be-bb0a-482d215417af" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.64:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.731704 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-5655c58dd6-dbkxt" podUID="09a2f99b-1398-4e56-ac77-ae6e4d9aaac8" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.788677 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="73559c8b-d017-4a5d-aced-3da25d264b0a" containerName="galera" probeResult="failure" output="command timed out" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.790477 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="cebb8554-fc88-4ab5-b2d9-61495b3648f6" containerName="prometheus" probeResult="failure" output="command timed out" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.812440 4693 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.812504 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.822761 4693 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" start-of-body= Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.822824 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" Dec 12 17:15:00 crc kubenswrapper[4693]: I1212 17:15:00.950486 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="03515034-0f60-4e96-b2cc-9784f6e07887" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.161:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.081045 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" event={"ID":"8c0d2adb-6fec-4574-8733-b6e817a943e5","Type":"ContainerStarted","Data":"5f8c947412bfaee06eb92db18ea2fb334d360c72153a90d8f78758db09787e4b"} Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.081390 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.089731 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-wcl2w_f51bf74b-1d86-4a22-a355-f2c64a6516e5/console-operator/0.log" Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.089793 4693 generic.go:334] "Generic (PLEG): container finished" podID="f51bf74b-1d86-4a22-a355-f2c64a6516e5" containerID="5dc332e8efc65c41ab73c49cd3f3bb76270165246a873cf2098c2facbefb5af5" exitCode=1 Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.089865 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" event={"ID":"f51bf74b-1d86-4a22-a355-f2c64a6516e5","Type":"ContainerDied","Data":"5dc332e8efc65c41ab73c49cd3f3bb76270165246a873cf2098c2facbefb5af5"} Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.092628 4693 generic.go:334] "Generic (PLEG): container finished" podID="26cbeab5-89ba-425a-87c0-8797ceb6957a" containerID="970ce2fc0dbc6fcc4e0fa4562e5b5d2591a444a6407defefa1819ef84f3286b7" exitCode=137 Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.092685 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" event={"ID":"26cbeab5-89ba-425a-87c0-8797ceb6957a","Type":"ContainerDied","Data":"970ce2fc0dbc6fcc4e0fa4562e5b5d2591a444a6407defefa1819ef84f3286b7"} Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.311574 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-ndkbb" podUID="a0fbfcb7-b516-452f-be80-ddd275ed0987" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.361663 4693 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": read tcp 192.168.126.11:35660->192.168.126.11:10257: read: connection reset by peer" start-of-body= Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.361766 4693 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": EOF" start-of-body= Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.361818 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": EOF" Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.361734 4693 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": read tcp 192.168.126.11:35660->192.168.126.11:10257: read: connection reset by peer" Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.405132 4693 patch_prober.go:28] interesting pod/nmstate-webhook-f8fb84555-4jrn4 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.87:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.405218 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-f8fb84555-4jrn4" podUID="18a4d414-e872-4bb3-ae29-166fcc455a9a" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.87:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.791094 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="cebb8554-fc88-4ab5-b2d9-61495b3648f6" containerName="prometheus" probeResult="failure" output="command timed out" Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.791912 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="ec693a73-a415-42a1-98f4-86438aa58d56" containerName="galera" probeResult="failure" output="command timed out" Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.914581 4693 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.914659 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.915480 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-jstfj container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.915513 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" podUID="752b64e1-40d2-47cd-a555-0e23495e2443" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.915944 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-jstfj container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request 
Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.916101 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" podUID="752b64e1-40d2-47cd-a555-0e23495e2443" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.936152 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-6fjrq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.936226 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" podUID="34840dce-2cd9-4cf3-81cc-be2fb6e08993" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.937098 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-6fjrq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 12 17:15:01 crc kubenswrapper[4693]: I1212 17:15:01.937163 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-6fjrq" podUID="34840dce-2cd9-4cf3-81cc-be2fb6e08993" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.113001 4693 generic.go:334] "Generic (PLEG): container finished" podID="cae4711d-6ae1-402f-9fc8-751998ed785d" containerID="11c8d1c7d9c661001bf7db151b3c68c56edce7a8688ad7a11c612875c0194270" exitCode=0
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.113888 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" event={"ID":"cae4711d-6ae1-402f-9fc8-751998ed785d","Type":"ContainerDied","Data":"11c8d1c7d9c661001bf7db151b3c68c56edce7a8688ad7a11c612875c0194270"}
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.627265 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj" podUID="4fa52597-7870-4902-a274-6a4103c3630b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.627586 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tdrqj" podUID="4fa52597-7870-4902-a274-6a4103c3630b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.628460 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="cd076b50-0211-4876-b7df-b7140ebac121" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.2:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.628478 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="cd076b50-0211-4876-b7df-b7140ebac121" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.2:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.628609 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0"
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.630057 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"c048ca5dfdbb7ab3e02258928c08d02fced1533e48af292164ae3ca630bc8112"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted"
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.630107 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="cd076b50-0211-4876-b7df-b7140ebac121" containerName="kube-state-metrics" containerID="cri-o://c048ca5dfdbb7ab3e02258928c08d02fced1533e48af292164ae3ca630bc8112" gracePeriod=30
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.715082 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" podUID="ac211615-b518-4011-be82-483cbb246d4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.715254 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87"
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.716878 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" podUID="ac211615-b518-4011-be82-483cbb246d4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.718708 4693 patch_prober.go:28] interesting pod/route-controller-manager-66fdbf566b-4w29d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body=
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.718777 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" podUID="41b94683-51bf-4720-9160-36bd373d88ba" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused"
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.909403 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.940563 4693 patch_prober.go:28] interesting pod/oauth-openshift-db548d47c-z22tr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.940631 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" podUID="70395fde-23f6-41b0-a04e-c4568b405e9d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:15:02 crc kubenswrapper[4693]: I1212 17:15:02.940710 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr"
Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.022798 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" podUID="fe8f2a92-e87a-40d4-b96b-0e0af6443656" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.023261 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h" podUID="267498f5-fa7b-44ec-bd94-361a261e8844" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.023375 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h"
Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.125803 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h" podUID="267498f5-fa7b-44ec-bd94-361a261e8844" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.125939 4693 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5c67884d5c-jpl4r container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.51:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.125970 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" podUID="ac702b24-9bff-4198-a7f7-e368773fb8de" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.51:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.166483 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" event={"ID":"26cbeab5-89ba-425a-87c0-8797ceb6957a","Type":"ContainerStarted","Data":"dd25569e43d8b1dde9cdabfe17f44c0a6082b778e715fcffff2cca3583184322"}
event={"ID":"26cbeab5-89ba-425a-87c0-8797ceb6957a","Type":"ContainerStarted","Data":"dd25569e43d8b1dde9cdabfe17f44c0a6082b778e715fcffff2cca3583184322"} Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.166638 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.168652 4693 generic.go:334] "Generic (PLEG): container finished" podID="87e8f397-20cd-469f-924d-204ce1a8db47" containerID="a5668880ee0e6829c325ff86e69fe77011d67cc1a0cdee6b6a6c956ccabaefdc" exitCode=0 Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.168752 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" event={"ID":"87e8f397-20cd-469f-924d-204ce1a8db47","Type":"ContainerDied","Data":"a5668880ee0e6829c325ff86e69fe77011d67cc1a0cdee6b6a6c956ccabaefdc"} Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.171651 4693 patch_prober.go:28] interesting pod/controller-manager-6964955f74-9kcjr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.171712 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" podUID="4741216c-0a8d-4079-b459-cb459dc4f5b3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.171754 4693 patch_prober.go:28] interesting pod/oauth-openshift-db548d47c-z22tr container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.171767 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" podUID="70395fde-23f6-41b0-a04e-c4568b405e9d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.171786 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.172382 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" podUID="fe8f2a92-e87a-40d4-b96b-0e0af6443656" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.172830 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4wp87" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.173204 4693 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5c67884d5c-jpl4r container/manager 
namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.51:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.173252 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" podUID="ac702b24-9bff-4198-a7f7-e368773fb8de" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.51:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.173314 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.177467 4693 generic.go:334] "Generic (PLEG): container finished" podID="f24e6c5f-a222-44d6-8c2a-75b0d066e218" containerID="020a118cda47c4339ba0ef9dd6d3d3b139bcef20bca409e55f2a7cd0885112e1" exitCode=137 Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.177584 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qqwq" event={"ID":"f24e6c5f-a222-44d6-8c2a-75b0d066e218","Type":"ContainerDied","Data":"020a118cda47c4339ba0ef9dd6d3d3b139bcef20bca409e55f2a7cd0885112e1"} Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.181950 4693 generic.go:334] "Generic (PLEG): container finished" podID="cd076b50-0211-4876-b7df-b7140ebac121" containerID="c048ca5dfdbb7ab3e02258928c08d02fced1533e48af292164ae3ca630bc8112" exitCode=2 Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.182055 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cd076b50-0211-4876-b7df-b7140ebac121","Type":"ContainerDied","Data":"c048ca5dfdbb7ab3e02258928c08d02fced1533e48af292164ae3ca630bc8112"} Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.185971 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.203905 4693 status_manager.go:875] "Failed to update status for pod" pod="openshift-marketplace/community-operators-hsxzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0fa56db-49e1-4138-b485-aaaa14e7ebdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T17:14:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"reason\\\":\\\"PodCompleted\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T17:14:40Z\\\",\\\"reason\\\":\\\"PodCompleted\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T17:14:40Z\\\",\\\"reason\\\":\\\"PodCompleted\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c534982692b2a2a44f449c231e110caf743450f170d82e3eb5eacb8bfa04521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c534982692b2a2a44f449c231e110caf743450f170d82e3eb5eacb8bfa04521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T17:14:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T17:14:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/extracted-catalog\\\",\\\"name\\\":\\\"catalog-content\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bklrz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Succeeded\\\"}}\" for pod \"openshift-marketplace\"/\"community-operators-hsxzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=1s\": context deadline exceeded" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.210219 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.211246 4693 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1adb73c07e53bc378d1cea7ef797f24c5f8be2a84d6833262c2329d35ba64820" exitCode=1 Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.215132 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1adb73c07e53bc378d1cea7ef797f24c5f8be2a84d6833262c2329d35ba64820"} Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.215215 4693 scope.go:117] "RemoveContainer" containerID="20c83064785ad5afbaad29c72ca34b32572d75fc1f11dd97c3730a7b62dd32bc" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.219576 4693 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6x8t container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get 
\"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.219633 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" podUID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.219695 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.222412 4693 scope.go:117] "RemoveContainer" containerID="1adb73c07e53bc378d1cea7ef797f24c5f8be2a84d6833262c2329d35ba64820" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.225947 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"f256f43349cf28f83fea42583eab372fb88e9e09f461bf9a43c98da70fedc314"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.226007 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" podUID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerName="openshift-config-operator" containerID="cri-o://f256f43349cf28f83fea42583eab372fb88e9e09f461bf9a43c98da70fedc314" gracePeriod=30 Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.240366 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" event={"ID":"cae4711d-6ae1-402f-9fc8-751998ed785d","Type":"ContainerStarted","Data":"5f1da1eb695ab8586553b072666d65e67f1af67f1d284044099c76ca4401bc3d"} Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.241144 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.251066 4693 generic.go:334] "Generic (PLEG): container finished" podID="a0fbfcb7-b516-452f-be80-ddd275ed0987" containerID="231ef294eed60ec252f71a59ce58051eef244007adb124dd46f77c91963b90f9" exitCode=137 Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.251248 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ndkbb" event={"ID":"a0fbfcb7-b516-452f-be80-ddd275ed0987","Type":"ContainerDied","Data":"231ef294eed60ec252f71a59ce58051eef244007adb124dd46f77c91963b90f9"} Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.261615 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bkfnh" podUID="4c46ca75-8071-4f2a-bda0-44bf851365cb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.262046 4693 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6x8t container/openshift-config-operator 
namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.262092 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" podUID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.262211 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.262460 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-lz95h" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.262658 4693 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-zwt44 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body= Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.262713 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" podUID="cae4711d-6ae1-402f-9fc8-751998ed785d" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.272617 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-wcl2w_f51bf74b-1d86-4a22-a355-f2c64a6516e5/console-operator/0.log" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.273206 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" event={"ID":"f51bf74b-1d86-4a22-a355-f2c64a6516e5","Type":"ContainerStarted","Data":"b37fd34793fd182b6fc36c2d0af9dbb5363ceb7bb958372fd27e345d4479c56c"} Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.274015 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.274671 4693 patch_prober.go:28] interesting pod/console-operator-58897d9998-wcl2w container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.274741 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" podUID="f51bf74b-1d86-4a22-a355-f2c64a6516e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.280991 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="metallb-system/frr-k8s-7qqwq" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.291827 4693 generic.go:334] "Generic (PLEG): container finished" podID="38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95" containerID="704e5981a535815e3d0aa2b0ef343d5ed69f70b480077e26e33a1c4fa93c4793" exitCode=0 Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.292460 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" event={"ID":"38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95","Type":"ContainerDied","Data":"704e5981a535815e3d0aa2b0ef343d5ed69f70b480077e26e33a1c4fa93c4793"} Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.312772 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.313721 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"1b9d77c1cbf5e73bd221f286b168fe2c4a8d6d3a0409c7e41383af29dd1358b6"} pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.367156 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg" podUID="f73b5773-7bac-41ae-af91-0e504b5a234f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.368499 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-pt4rg" podUID="f73b5773-7bac-41ae-af91-0e504b5a234f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.519874 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.524079 4693 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-zwt44 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body= Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.524130 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" podUID="cae4711d-6ae1-402f-9fc8-751998ed785d" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.524184 4693 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-zwt44 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body= Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.524198 4693 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" podUID="cae4711d-6ae1-402f-9fc8-751998ed785d" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.605417 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" podUID="96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.605536 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" podUID="96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.820701 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.821051 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-f2llf" podUID="19868aeb-2fda-43a6-8801-7d72c8465394" containerName="registry-server" probeResult="failure" output="command timed out" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.821079 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-f2llf" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.821564 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"72084ee966659b77cd3a66842b1c8f2883e459499e08f949184fd9e6751c96ea"} pod="openstack-operators/openstack-operator-index-f2llf" containerMessage="Container registry-server failed liveness probe, will be restarted" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.821594 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-f2llf" podUID="19868aeb-2fda-43a6-8801-7d72c8465394" containerName="registry-server" containerID="cri-o://72084ee966659b77cd3a66842b1c8f2883e459499e08f949184fd9e6751c96ea" gracePeriod=30 Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.821681 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-f2llf" podUID="19868aeb-2fda-43a6-8801-7d72c8465394" containerName="registry-server" probeResult="failure" output="command timed out" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.821718 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-f2llf" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.821776 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h" podUID="847f97b7-84be-4d2a-a699-30ca49fd1023" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.821859 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-wsr9h" podUID="847f97b7-84be-4d2a-a699-30ca49fd1023" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.822312 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.987538 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" podUID="1a4a5a89-abec-4d7b-9df5-ddcc4643fca0" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.13:8000/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:03 crc kubenswrapper[4693]: I1212 17:15:03.987538 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-api-666b75576f-n2mqg" podUID="2530fdec-8001-43a2-a0dd-2735ef97ef57" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.12:8004/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.070807 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-5fb5b97f75-jgjtw" podUID="1a4a5a89-abec-4d7b-9df5-ddcc4643fca0" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.13:8000/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.071208 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-666b75576f-n2mqg" podUID="2530fdec-8001-43a2-a0dd-2735ef97ef57" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.12:8004/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.216649 4693 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5c67884d5c-jpl4r container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.51:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.217093 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" podUID="ac702b24-9bff-4198-a7f7-e368773fb8de" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.51:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.231426 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.329544 4693 trace.go:236] Trace[513427831]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-t2jb4" (12-Dec-2025 17:15:02.139) (total time: 2188ms): Dec 12 17:15:04 crc kubenswrapper[4693]: 
Trace[513427831]: [2.188825192s] [2.188825192s] END Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.329639 4693 trace.go:236] Trace[1730748628]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (12-Dec-2025 17:14:57.060) (total time: 7269ms): Dec 12 17:15:04 crc kubenswrapper[4693]: Trace[1730748628]: [7.269465666s] [7.269465666s] END Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.356370 4693 generic.go:334] "Generic (PLEG): container finished" podID="61620225-2125-49da-94f6-f6ef9dd7e6ce" containerID="82d8ce9a09e27e3be2af59e50054920230529cb0173716e36fd4edafc1822555" exitCode=0 Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.356454 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" event={"ID":"61620225-2125-49da-94f6-f6ef9dd7e6ce","Type":"ContainerDied","Data":"82d8ce9a09e27e3be2af59e50054920230529cb0173716e36fd4edafc1822555"} Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.361341 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c2e2ba8-1cb5-42c1-979b-9c48aefb7f40","Type":"ContainerStarted","Data":"edf5707c695d8021be4e8587625cbee026f97324b9634412f474838f78e4da9e"} Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.367060 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" event={"ID":"38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95","Type":"ContainerStarted","Data":"0dbf56c8b9581233f65426788d375800686791341468ad6f47fc6dd75a16dd15"} Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.367300 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.367871 4693 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hhs2z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.367934 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" podUID="38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.371044 4693 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1" exitCode=0 Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.371132 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"9141897abf18bfa9aa4d537e0e117efd7eeb1137e4f4eb0aeb4d68ed07430ff1"} Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.375303 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.378570 4693 generic.go:334] "Generic (PLEG): container finished" 
podID="5a6d7731-5c1e-4b9b-b847-6deabf3f6af9" containerID="e1c262531dfecff5b7c6af1ae256cd13df8258d3b62f310fcb97d8e08ffe7370" exitCode=0 Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.378668 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" event={"ID":"5a6d7731-5c1e-4b9b-b847-6deabf3f6af9","Type":"ContainerDied","Data":"e1c262531dfecff5b7c6af1ae256cd13df8258d3b62f310fcb97d8e08ffe7370"} Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.388605 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qqwq" event={"ID":"f24e6c5f-a222-44d6-8c2a-75b0d066e218","Type":"ContainerStarted","Data":"4609d20f5acad6331984610fdd5452e507b0335c0a5476fc90c7869c3bc52606"} Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.388786 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-7qqwq" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.422767 4693 generic.go:334] "Generic (PLEG): container finished" podID="a41df83d-6bb2-4c49-a431-f5851036a44d" containerID="407944a20bacde363ae3e0b0970210522ae53361922a1667ebc7424a8abcc44c" exitCode=1 Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.422817 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" event={"ID":"a41df83d-6bb2-4c49-a431-f5851036a44d","Type":"ContainerDied","Data":"407944a20bacde363ae3e0b0970210522ae53361922a1667ebc7424a8abcc44c"} Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.425846 4693 scope.go:117] "RemoveContainer" containerID="407944a20bacde363ae3e0b0970210522ae53361922a1667ebc7424a8abcc44c" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.428004 4693 generic.go:334] "Generic (PLEG): container finished" podID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerID="f256f43349cf28f83fea42583eab372fb88e9e09f461bf9a43c98da70fedc314" exitCode=0 Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.428094 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" event={"ID":"62fa5de9-a571-40e5-a32c-e1708a428f19","Type":"ContainerDied","Data":"f256f43349cf28f83fea42583eab372fb88e9e09f461bf9a43c98da70fedc314"} Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.430319 4693 patch_prober.go:28] interesting pod/console-65dd6d4bcb-h8fs2 container/console namespace/openshift-console: Liveness probe status=failure output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.430467 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/console-65dd6d4bcb-h8fs2" podUID="ac10c353-ed34-4f82-ad22-dc0065fbb96e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.430642 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.435094 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console" containerStatusID={"Type":"cri-o","ID":"df2c0ba9a10de9c6284da4af4cfdf8984f720e1a8f7248d0acad338c9985812e"} pod="openshift-console/console-65dd6d4bcb-h8fs2" 
containerMessage="Container console failed liveness probe, will be restarted" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.444045 4693 generic.go:334] "Generic (PLEG): container finished" podID="96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c" containerID="b3a8a7dfec531bbd1782fdace0f56de8b1459733c11e3220731632ffeb2c553f" exitCode=1 Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.444166 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" event={"ID":"96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c","Type":"ContainerDied","Data":"b3a8a7dfec531bbd1782fdace0f56de8b1459733c11e3220731632ffeb2c553f"} Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.447248 4693 scope.go:117] "RemoveContainer" containerID="b3a8a7dfec531bbd1782fdace0f56de8b1459733c11e3220731632ffeb2c553f" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.451986 4693 generic.go:334] "Generic (PLEG): container finished" podID="41b94683-51bf-4720-9160-36bd373d88ba" containerID="05594e0e6948065007460b75a1cefd9b075372db15fd8c62321575e79d1e6ee9" exitCode=0 Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.452104 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" event={"ID":"41b94683-51bf-4720-9160-36bd373d88ba","Type":"ContainerDied","Data":"05594e0e6948065007460b75a1cefd9b075372db15fd8c62321575e79d1e6ee9"} Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.473945 4693 generic.go:334] "Generic (PLEG): container finished" podID="4741216c-0a8d-4079-b459-cb459dc4f5b3" containerID="596d34b0d0c5f7955cda287604606de90aad230ea92bd147f6351762f75fc764" exitCode=0 Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.476896 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" event={"ID":"4741216c-0a8d-4079-b459-cb459dc4f5b3","Type":"ContainerDied","Data":"596d34b0d0c5f7955cda287604606de90aad230ea92bd147f6351762f75fc764"} Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.476940 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.478544 4693 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-zwt44 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body= Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.478580 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" podUID="cae4711d-6ae1-402f-9fc8-751998ed785d" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.479241 4693 patch_prober.go:28] interesting pod/console-operator-58897d9998-wcl2w container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.479327 4693 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console-operator/console-operator-58897d9998-wcl2w" podUID="f51bf74b-1d86-4a22-a355-f2c64a6516e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.483453 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fmkmwl" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.611591 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-bhqb5" podUID="60ada46e-eb41-4339-a653-610721982c81" containerName="registry-server" probeResult="failure" output=< Dec 12 17:15:04 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:15:04 crc kubenswrapper[4693]: > Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.611702 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.614034 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"3e36f0f008f0523f9232b639bd37f432d37978155b96dc75dbe5bc3c72d9857b"} pod="openshift-marketplace/redhat-operators-bhqb5" containerMessage="Container registry-server failed liveness probe, will be restarted" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.614091 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bhqb5" podUID="60ada46e-eb41-4339-a653-610721982c81" containerName="registry-server" containerID="cri-o://3e36f0f008f0523f9232b639bd37f432d37978155b96dc75dbe5bc3c72d9857b" gracePeriod=30 Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.618428 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-sbjt9" podUID="cabce219-6d9f-4aac-9402-ecf80e930f68" containerName="registry-server" probeResult="failure" output=< Dec 12 17:15:04 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:15:04 crc kubenswrapper[4693]: > Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.618529 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.619471 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-sbjt9" podUID="cabce219-6d9f-4aac-9402-ecf80e930f68" containerName="registry-server" probeResult="failure" output=< Dec 12 17:15:04 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:15:04 crc kubenswrapper[4693]: > Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.619553 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-q5jh6" podUID="2dabbc2a-64f3-4f25-8b65-17ed75c51801" containerName="registry-server" probeResult="failure" output=< Dec 12 17:15:04 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:15:04 crc kubenswrapper[4693]: > Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.619638 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-q5jh6" podUID="2dabbc2a-64f3-4f25-8b65-17ed75c51801" 
containerName="registry-server" probeResult="failure" output=< Dec 12 17:15:04 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:15:04 crc kubenswrapper[4693]: > Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.621433 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.625594 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.625764 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.627768 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"0325aa781ea934ee1b75d71ea522211958ede90495dbe62a8af90e04c0c7479f"} pod="openshift-marketplace/redhat-marketplace-sbjt9" containerMessage="Container registry-server failed liveness probe, will be restarted" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.630480 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sbjt9" podUID="cabce219-6d9f-4aac-9402-ecf80e930f68" containerName="registry-server" containerID="cri-o://0325aa781ea934ee1b75d71ea522211958ede90495dbe62a8af90e04c0c7479f" gracePeriod=30 Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.628699 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"baac4e20b8610ebe1b070a44df663c53a762a529e2d9e71ba3d832a73ae486e6"} pod="openshift-marketplace/certified-operators-q5jh6" containerMessage="Container registry-server failed liveness probe, will be restarted" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.630592 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q5jh6" podUID="2dabbc2a-64f3-4f25-8b65-17ed75c51801" containerName="registry-server" containerID="cri-o://baac4e20b8610ebe1b070a44df663c53a762a529e2d9e71ba3d832a73ae486e6" gracePeriod=30 Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.633022 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-bhqb5" podUID="60ada46e-eb41-4339-a653-610721982c81" containerName="registry-server" probeResult="failure" output=< Dec 12 17:15:04 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:15:04 crc kubenswrapper[4693]: > Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.633136 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.797848 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-565558f958-fnjh4" Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.943633 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hsxzq"] Dec 12 17:15:04 crc kubenswrapper[4693]: I1212 17:15:04.952544 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hsxzq"] Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.128045 4693 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="ec693a73-a415-42a1-98f4-86438aa58d56" containerName="galera" containerID="cri-o://ee2909168c683d47031dfb2427036a864486334bf9662c1d0420c4430180999b" gracePeriod=17 Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.130945 4693 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6x8t container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.130993 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" podUID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.183209 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="73559c8b-d017-4a5d-aced-3da25d264b0a" containerName="galera" containerID="cri-o://ecf679915acb4c8bd632299eb618fc9ab601608b029a46cdfa94cef79871b750" gracePeriod=15 Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.336985 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-q5jh6" podUID="2dabbc2a-64f3-4f25-8b65-17ed75c51801" containerName="registry-server" probeResult="failure" output="" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.402892 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0fa56db-49e1-4138-b485-aaaa14e7ebdd" path="/var/lib/kubelet/pods/f0fa56db-49e1-4138-b485-aaaa14e7ebdd/volumes" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.516482 4693 generic.go:334] "Generic (PLEG): container finished" podID="fe8f2a92-e87a-40d4-b96b-0e0af6443656" containerID="296f286281a7411ee64fcf0c6d8f9c94e74bff3a98bb05dece96c22a0b72ef14" exitCode=1 Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.516558 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" event={"ID":"fe8f2a92-e87a-40d4-b96b-0e0af6443656","Type":"ContainerDied","Data":"296f286281a7411ee64fcf0c6d8f9c94e74bff3a98bb05dece96c22a0b72ef14"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.517980 4693 scope.go:117] "RemoveContainer" containerID="296f286281a7411ee64fcf0c6d8f9c94e74bff3a98bb05dece96c22a0b72ef14" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.533178 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wz942" event={"ID":"87e8f397-20cd-469f-924d-204ce1a8db47","Type":"ContainerStarted","Data":"b16819005209b115bcb8ecc0f6f9be96923ce95be8fd1e15eba032c1b8d6ec11"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.548629 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cd076b50-0211-4876-b7df-b7140ebac121","Type":"ContainerStarted","Data":"d058f6d67be28417e0c2203d4ac437973e0be6df358758272b4b7afa622c4caa"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.548754 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 12 
17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.560413 4693 generic.go:334] "Generic (PLEG): container finished" podID="955c500c-bfaa-463d-b207-fcf0bd9bd9f2" containerID="fff1c22407f0b3d131b2d6713159653190cbfe07a582c046558bce4b498545d8" exitCode=1 Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.560761 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" event={"ID":"955c500c-bfaa-463d-b207-fcf0bd9bd9f2","Type":"ContainerDied","Data":"fff1c22407f0b3d131b2d6713159653190cbfe07a582c046558bce4b498545d8"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.564261 4693 scope.go:117] "RemoveContainer" containerID="fff1c22407f0b3d131b2d6713159653190cbfe07a582c046558bce4b498545d8" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.565931 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" event={"ID":"62fa5de9-a571-40e5-a32c-e1708a428f19","Type":"ContainerStarted","Data":"1c7bfdb3b24b045731743ab3b2d8a413da2603ad6e9766df0b823a07f6cf90c4"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.566579 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.570373 4693 generic.go:334] "Generic (PLEG): container finished" podID="19868aeb-2fda-43a6-8801-7d72c8465394" containerID="72084ee966659b77cd3a66842b1c8f2883e459499e08f949184fd9e6751c96ea" exitCode=0 Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.570424 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f2llf" event={"ID":"19868aeb-2fda-43a6-8801-7d72c8465394","Type":"ContainerDied","Data":"72084ee966659b77cd3a66842b1c8f2883e459499e08f949184fd9e6751c96ea"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.573398 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"57c96315570fc387dc3efb74e723cc6e674852a7511fe921c20eebce73a50eb8"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.574369 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.582013 4693 generic.go:334] "Generic (PLEG): container finished" podID="2dabbc2a-64f3-4f25-8b65-17ed75c51801" containerID="baac4e20b8610ebe1b070a44df663c53a762a529e2d9e71ba3d832a73ae486e6" exitCode=0 Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.582202 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh6" event={"ID":"2dabbc2a-64f3-4f25-8b65-17ed75c51801","Type":"ContainerDied","Data":"baac4e20b8610ebe1b070a44df663c53a762a529e2d9e71ba3d832a73ae486e6"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.597339 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" event={"ID":"4741216c-0a8d-4079-b459-cb459dc4f5b3","Type":"ContainerStarted","Data":"5dfc7e851b2d96d2a3ae58f40194999c0d1d31ac079a535a9c6b5edc2c298cc5"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.598415 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 
17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.598458 4693 patch_prober.go:28] interesting pod/controller-manager-6964955f74-9kcjr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.598512 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" podUID="4741216c-0a8d-4079-b459-cb459dc4f5b3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.607205 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ndkbb" event={"ID":"a0fbfcb7-b516-452f-be80-ddd275ed0987","Type":"ContainerStarted","Data":"3f64179db7b767dce9b93d8ba240ea71c7258f25896a4e3c28df8967d26b2c17"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.610712 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-ndkbb" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.622922 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" event={"ID":"5a6d7731-5c1e-4b9b-b847-6deabf3f6af9","Type":"ContainerStarted","Data":"b5e34100826e0e5e96ae2386974736f82e66d733062602628e4ab00da1133dba"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.625832 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.631135 4693 generic.go:334] "Generic (PLEG): container finished" podID="f56863f1-3f85-4c6f-a2a6-81f0ee3b6317" containerID="2f94af0f17cefe50ab0aba345a03a2b51e7e81ba41c7151b1cc4d151e686966d" exitCode=1 Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.631481 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw" event={"ID":"f56863f1-3f85-4c6f-a2a6-81f0ee3b6317","Type":"ContainerDied","Data":"2f94af0f17cefe50ab0aba345a03a2b51e7e81ba41c7151b1cc4d151e686966d"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.635474 4693 scope.go:117] "RemoveContainer" containerID="2f94af0f17cefe50ab0aba345a03a2b51e7e81ba41c7151b1cc4d151e686966d" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.649890 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" event={"ID":"41b94683-51bf-4720-9160-36bd373d88ba","Type":"ContainerStarted","Data":"ca4a6073444389fcf3b768f7f0752aae9d87be94f897ca674b46399bd2d8b9fd"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.650533 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.650710 4693 patch_prober.go:28] interesting pod/route-controller-manager-66fdbf566b-4w29d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.650760 4693 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" podUID="41b94683-51bf-4720-9160-36bd373d88ba" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.665505 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.667911 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bd4c6981e10c7955a0d31336fbcb78800f626a4bffb8b076bac18dd4cc3398c3"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.674944 4693 generic.go:334] "Generic (PLEG): container finished" podID="cabce219-6d9f-4aac-9402-ecf80e930f68" containerID="0325aa781ea934ee1b75d71ea522211958ede90495dbe62a8af90e04c0c7479f" exitCode=0 Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.675048 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbjt9" event={"ID":"cabce219-6d9f-4aac-9402-ecf80e930f68","Type":"ContainerDied","Data":"0325aa781ea934ee1b75d71ea522211958ede90495dbe62a8af90e04c0c7479f"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.678841 4693 generic.go:334] "Generic (PLEG): container finished" podID="5616685d-71d7-49b9-8c1b-6eccc11a74a1" containerID="8806d148894154b8e08ea7ca80d695a5b54c3bb76348728810c0c0f3d36a84f2" exitCode=1 Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.679187 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" event={"ID":"5616685d-71d7-49b9-8c1b-6eccc11a74a1","Type":"ContainerDied","Data":"8806d148894154b8e08ea7ca80d695a5b54c3bb76348728810c0c0f3d36a84f2"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.682973 4693 scope.go:117] "RemoveContainer" containerID="8806d148894154b8e08ea7ca80d695a5b54c3bb76348728810c0c0f3d36a84f2" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.690171 4693 generic.go:334] "Generic (PLEG): container finished" podID="60ada46e-eb41-4339-a653-610721982c81" containerID="3e36f0f008f0523f9232b639bd37f432d37978155b96dc75dbe5bc3c72d9857b" exitCode=0 Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.697532 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhqb5" event={"ID":"60ada46e-eb41-4339-a653-610721982c81","Type":"ContainerDied","Data":"3e36f0f008f0523f9232b639bd37f432d37978155b96dc75dbe5bc3c72d9857b"} Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.699391 4693 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hhs2z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.699436 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" podUID="38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95" containerName="olm-operator" probeResult="failure" output="Get 
\"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.989951 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5895d59bb8-gcwwx" Dec 12 17:15:05 crc kubenswrapper[4693]: I1212 17:15:05.998940 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-ht5fw" Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.000017 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-76cc67bf56-fsnzk" Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.328853 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.715128 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" event={"ID":"fe8f2a92-e87a-40d4-b96b-0e0af6443656","Type":"ContainerStarted","Data":"a0c74744e7aadcd40cd0611033e86563998c27b1791443184332d2d21df1200e"} Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.715763 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.718439 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh6" event={"ID":"2dabbc2a-64f3-4f25-8b65-17ed75c51801","Type":"ContainerStarted","Data":"af23249e15278cf97d711a046828d7e08faaa2f6a041daf4ba79b5737f07f03f"} Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.721879 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbjt9" event={"ID":"cabce219-6d9f-4aac-9402-ecf80e930f68","Type":"ContainerStarted","Data":"b2fefd20c1ce607ef30f217b47bcf0dfab1575c1b4d0a484c40d80265ca7510b"} Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.725262 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhqb5" event={"ID":"60ada46e-eb41-4339-a653-610721982c81","Type":"ContainerStarted","Data":"d99ee5d7ed2083b3b26673c9fd123575860f8651233c63f3c81c7d5b6852eaeb"} Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.728533 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw" event={"ID":"f56863f1-3f85-4c6f-a2a6-81f0ee3b6317","Type":"ContainerStarted","Data":"de65fced4e79a38f8b10da4bab252b448c07ee12e1635be83e5ea3460a107ee9"} Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.728908 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw" Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.731950 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" event={"ID":"955c500c-bfaa-463d-b207-fcf0bd9bd9f2","Type":"ContainerStarted","Data":"1c93ad3a88a3dee2d4135fbefd1327f91f6945825f4d31fb1b37e2c7ae01fe4f"} Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.732089 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" Dec 12 17:15:06 
crc kubenswrapper[4693]: I1212 17:15:06.739186 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f2llf" event={"ID":"19868aeb-2fda-43a6-8801-7d72c8465394","Type":"ContainerStarted","Data":"0bc9b9b78df67c377fb6f58859e40f562394dc89b9e5574f009e8a32dcb14f1f"} Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.743334 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" event={"ID":"5616685d-71d7-49b9-8c1b-6eccc11a74a1","Type":"ContainerStarted","Data":"ab21a0210f7c2f882c8724c8d96bdbd74564f124b06ce1f40feacc8d0d5b3884"} Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.743537 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.747247 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" event={"ID":"61620225-2125-49da-94f6-f6ef9dd7e6ce","Type":"ContainerStarted","Data":"3031775f7ca307a90669249c8712709d883b7b0abbc1441b38ac4051d570ce0c"} Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.747544 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.747928 4693 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d86dw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.748075 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" podUID="61620225-2125-49da-94f6-f6ef9dd7e6ce" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.757378 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" event={"ID":"a41df83d-6bb2-4c49-a431-f5851036a44d","Type":"ContainerStarted","Data":"0e33ce91b5e33c39478bcb29499d10965f866d5a254e77f8b845741d4316bde2"} Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.757917 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.761343 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" event={"ID":"96adb3fc-0bd6-44a0-9a3a-3bae3aa3a30c","Type":"ContainerStarted","Data":"5bd4e777a3dec3bc7b03c93e8d0c5df26f5ba03722af53b572dc82b3f779d70a"} Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.764462 4693 status_manager.go:317] "Container readiness changed for unknown container" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" containerID="cri-o://b3a8a7dfec531bbd1782fdace0f56de8b1459733c11e3220731632ffeb2c553f" Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.764489 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.765553 4693 patch_prober.go:28] interesting pod/route-controller-manager-66fdbf566b-4w29d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.765595 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" podUID="41b94683-51bf-4720-9160-36bd373d88ba" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.765670 4693 patch_prober.go:28] interesting pod/controller-manager-6964955f74-9kcjr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.765713 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" podUID="4741216c-0a8d-4079-b459-cb459dc4f5b3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.954856 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Dec 12 17:15:06 crc kubenswrapper[4693]: [+]has-synced ok Dec 12 17:15:06 crc kubenswrapper[4693]: [-]process-running failed: reason withheld Dec 12 17:15:06 crc kubenswrapper[4693]: healthz check failed Dec 12 17:15:06 crc kubenswrapper[4693]: I1212 17:15:06.954927 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.320452 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.336395 4693 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hhs2z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.336498 4693 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hhs2z container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.336495 4693 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" podUID="38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.336660 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" podUID="38f2d4b7-8df3-47ff-9c8e-67a45d3e0a95" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.349539 4693 patch_prober.go:28] interesting pod/console-operator-58897d9998-wcl2w container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.349593 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" podUID="f51bf74b-1d86-4a22-a355-f2c64a6516e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.349606 4693 patch_prober.go:28] interesting pod/console-operator-58897d9998-wcl2w container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.349673 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" podUID="f51bf74b-1d86-4a22-a355-f2c64a6516e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused" Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.503578 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-f2llf" Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.503645 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-f2llf" Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.704981 4693 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d86dw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.705032 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" podUID="61620225-2125-49da-94f6-f6ef9dd7e6ce" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.705178 4693 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d86dw container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness 
probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.705198 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" podUID="61620225-2125-49da-94f6-f6ef9dd7e6ce" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.714760 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-vbmgp" Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.771813 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.772444 4693 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d86dw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.772481 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" podUID="61620225-2125-49da-94f6-f6ef9dd7e6ce" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Dec 12 17:15:07 crc kubenswrapper[4693]: I1212 17:15:07.900869 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-78f748b45-xcpg8" Dec 12 17:15:08 crc kubenswrapper[4693]: I1212 17:15:08.130517 4693 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6x8t container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 12 17:15:08 crc kubenswrapper[4693]: I1212 17:15:08.130966 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" podUID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 12 17:15:08 crc kubenswrapper[4693]: I1212 17:15:08.130894 4693 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6x8t container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 12 17:15:08 crc kubenswrapper[4693]: I1212 17:15:08.131030 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" podUID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 12 17:15:08 crc kubenswrapper[4693]: I1212 17:15:08.357611 4693 
scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:15:08 crc kubenswrapper[4693]: E1212 17:15:08.357968 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:15:08 crc kubenswrapper[4693]: I1212 17:15:08.600967 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:15:08 crc kubenswrapper[4693]: I1212 17:15:08.790442 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-hfmz9_49c82763-4d39-4424-8aa0-745158bd96c6/router/0.log" Dec 12 17:15:08 crc kubenswrapper[4693]: I1212 17:15:08.790506 4693 generic.go:334] "Generic (PLEG): container finished" podID="49c82763-4d39-4424-8aa0-745158bd96c6" containerID="979d3ca0028d2fa82eb5aa011aa5c9b4c3540b7482648ad9eed0d95cac19d909" exitCode=137 Dec 12 17:15:08 crc kubenswrapper[4693]: I1212 17:15:08.790712 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hfmz9" event={"ID":"49c82763-4d39-4424-8aa0-745158bd96c6","Type":"ContainerDied","Data":"979d3ca0028d2fa82eb5aa011aa5c9b4c3540b7482648ad9eed0d95cac19d909"} Dec 12 17:15:08 crc kubenswrapper[4693]: E1212 17:15:08.835767 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ecf679915acb4c8bd632299eb618fc9ab601608b029a46cdfa94cef79871b750" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 12 17:15:08 crc kubenswrapper[4693]: E1212 17:15:08.837906 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ecf679915acb4c8bd632299eb618fc9ab601608b029a46cdfa94cef79871b750" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 12 17:15:08 crc kubenswrapper[4693]: E1212 17:15:08.839057 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ecf679915acb4c8bd632299eb618fc9ab601608b029a46cdfa94cef79871b750" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 12 17:15:08 crc kubenswrapper[4693]: E1212 17:15:08.839121 4693 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="73559c8b-d017-4a5d-aced-3da25d264b0a" containerName="galera" Dec 12 17:15:09 crc kubenswrapper[4693]: I1212 17:15:09.397460 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack-operators/openstack-operator-index-f2llf" podUID="19868aeb-2fda-43a6-8801-7d72c8465394" 
containerName="registry-server" probeResult="failure" output=< Dec 12 17:15:09 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:15:09 crc kubenswrapper[4693]: > Dec 12 17:15:09 crc kubenswrapper[4693]: I1212 17:15:09.843161 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-hfmz9_49c82763-4d39-4424-8aa0-745158bd96c6/router/0.log" Dec 12 17:15:09 crc kubenswrapper[4693]: I1212 17:15:09.843521 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hfmz9" event={"ID":"49c82763-4d39-4424-8aa0-745158bd96c6","Type":"ContainerStarted","Data":"516c318dea2b3b71ebd608a478607a1ee604126614b7a4ba0ff36253fda3cd81"} Dec 12 17:15:09 crc kubenswrapper[4693]: I1212 17:15:09.847654 4693 generic.go:334] "Generic (PLEG): container finished" podID="ec693a73-a415-42a1-98f4-86438aa58d56" containerID="ee2909168c683d47031dfb2427036a864486334bf9662c1d0420c4430180999b" exitCode=0 Dec 12 17:15:09 crc kubenswrapper[4693]: I1212 17:15:09.847683 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ec693a73-a415-42a1-98f4-86438aa58d56","Type":"ContainerDied","Data":"ee2909168c683d47031dfb2427036a864486334bf9662c1d0420c4430180999b"} Dec 12 17:15:09 crc kubenswrapper[4693]: I1212 17:15:09.954627 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 17:15:09 crc kubenswrapper[4693]: I1212 17:15:09.959694 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Dec 12 17:15:09 crc kubenswrapper[4693]: I1212 17:15:09.959754 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Dec 12 17:15:10 crc kubenswrapper[4693]: I1212 17:15:10.363432 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 17:15:10 crc kubenswrapper[4693]: I1212 17:15:10.364593 4693 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 17:15:10 crc kubenswrapper[4693]: I1212 17:15:10.364655 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 17:15:10 crc kubenswrapper[4693]: E1212 17:15:10.429262 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2909168c683d47031dfb2427036a864486334bf9662c1d0420c4430180999b is running failed: container process not found" 
containerID="ee2909168c683d47031dfb2427036a864486334bf9662c1d0420c4430180999b" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 12 17:15:10 crc kubenswrapper[4693]: E1212 17:15:10.429899 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2909168c683d47031dfb2427036a864486334bf9662c1d0420c4430180999b is running failed: container process not found" containerID="ee2909168c683d47031dfb2427036a864486334bf9662c1d0420c4430180999b" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 12 17:15:10 crc kubenswrapper[4693]: E1212 17:15:10.430234 4693 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2909168c683d47031dfb2427036a864486334bf9662c1d0420c4430180999b is running failed: container process not found" containerID="ee2909168c683d47031dfb2427036a864486334bf9662c1d0420c4430180999b" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 12 17:15:10 crc kubenswrapper[4693]: E1212 17:15:10.430288 4693 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2909168c683d47031dfb2427036a864486334bf9662c1d0420c4430180999b is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="ec693a73-a415-42a1-98f4-86438aa58d56" containerName="galera" Dec 12 17:15:10 crc kubenswrapper[4693]: I1212 17:15:10.874943 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ec693a73-a415-42a1-98f4-86438aa58d56","Type":"ContainerStarted","Data":"0dc284da7495ee902775631a699be7d3b581c61004077ee0c70c287c32d689a9"} Dec 12 17:15:10 crc kubenswrapper[4693]: I1212 17:15:10.992075 4693 patch_prober.go:28] interesting pod/router-default-5444994796-hfmz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 17:15:10 crc kubenswrapper[4693]: [+]has-synced ok Dec 12 17:15:10 crc kubenswrapper[4693]: [+]process-running ok Dec 12 17:15:10 crc kubenswrapper[4693]: healthz check failed Dec 12 17:15:10 crc kubenswrapper[4693]: I1212 17:15:10.992125 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hfmz9" podUID="49c82763-4d39-4424-8aa0-745158bd96c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:15:11 crc kubenswrapper[4693]: I1212 17:15:11.104778 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 17:15:11 crc kubenswrapper[4693]: I1212 17:15:11.104825 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 17:15:11 crc kubenswrapper[4693]: I1212 17:15:11.745521 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-q7mdw" Dec 12 17:15:11 crc kubenswrapper[4693]: I1212 17:15:11.794722 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-nrfbt" Dec 12 17:15:11 crc kubenswrapper[4693]: I1212 17:15:11.821502 4693 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:15:11 crc kubenswrapper[4693]: I1212 17:15:11.821603 4693 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 12 17:15:11 crc kubenswrapper[4693]: I1212 17:15:11.822712 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"72b0b942a93ffffdbb799c3a665c85b2551dc03de28c51ff400e54dae2f01eaa"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Dec 12 17:15:11 crc kubenswrapper[4693]: I1212 17:15:11.822768 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" containerID="cri-o://72b0b942a93ffffdbb799c3a665c85b2551dc03de28c51ff400e54dae2f01eaa" gracePeriod=30 Dec 12 17:15:11 crc kubenswrapper[4693]: I1212 17:15:11.867099 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5c67884d5c-jpl4r" Dec 12 17:15:11 crc kubenswrapper[4693]: I1212 17:15:11.870842 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 17:15:11 crc kubenswrapper[4693]: I1212 17:15:11.870885 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 17:15:11 crc kubenswrapper[4693]: I1212 17:15:11.899303 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6964955f74-9kcjr" Dec 12 17:15:11 crc kubenswrapper[4693]: I1212 17:15:11.919366 4693 patch_prober.go:28] interesting pod/logging-loki-gateway-5665b75b44-jstfj container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:11 crc kubenswrapper[4693]: I1212 17:15:11.919446 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5665b75b44-jstfj" podUID="752b64e1-40d2-47cd-a555-0e23495e2443" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:11 crc kubenswrapper[4693]: I1212 17:15:11.955828 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 17:15:12 crc kubenswrapper[4693]: I1212 17:15:12.130449 4693 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6x8t container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:12 crc kubenswrapper[4693]: I1212 17:15:12.130853 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" 
podUID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:12 crc kubenswrapper[4693]: I1212 17:15:12.130981 4693 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w6x8t container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 17:15:12 crc kubenswrapper[4693]: I1212 17:15:12.131038 4693 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" podUID="62fa5de9-a571-40e5-a32c-e1708a428f19" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 17:15:12 crc kubenswrapper[4693]: I1212 17:15:12.147885 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-cdd8s" Dec 12 17:15:12 crc kubenswrapper[4693]: I1212 17:15:12.463956 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9jcss" Dec 12 17:15:12 crc kubenswrapper[4693]: I1212 17:15:12.484739 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-q5jh6" podUID="2dabbc2a-64f3-4f25-8b65-17ed75c51801" containerName="registry-server" probeResult="failure" output=< Dec 12 17:15:12 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:15:12 crc kubenswrapper[4693]: > Dec 12 17:15:12 crc kubenswrapper[4693]: I1212 17:15:12.543551 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-v852d" Dec 12 17:15:12 crc kubenswrapper[4693]: I1212 17:15:12.690559 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66fdbf566b-4w29d" Dec 12 17:15:12 crc kubenswrapper[4693]: I1212 17:15:12.788932 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-b86kc" Dec 12 17:15:12 crc kubenswrapper[4693]: I1212 17:15:12.902493 4693 generic.go:334] "Generic (PLEG): container finished" podID="73559c8b-d017-4a5d-aced-3da25d264b0a" containerID="ecf679915acb4c8bd632299eb618fc9ab601608b029a46cdfa94cef79871b750" exitCode=0 Dec 12 17:15:12 crc kubenswrapper[4693]: I1212 17:15:12.902616 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"73559c8b-d017-4a5d-aced-3da25d264b0a","Type":"ContainerDied","Data":"ecf679915acb4c8bd632299eb618fc9ab601608b029a46cdfa94cef79871b750"} Dec 12 17:15:12 crc kubenswrapper[4693]: I1212 17:15:12.902670 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"73559c8b-d017-4a5d-aced-3da25d264b0a","Type":"ContainerStarted","Data":"22672325e2d9d7e751ee08083b237a18b4ed14a24bc2ed803973524a2ac63a93"} Dec 12 17:15:12 crc kubenswrapper[4693]: I1212 
17:15:12.902837 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 17:15:12 crc kubenswrapper[4693]: I1212 17:15:12.906502 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hfmz9" Dec 12 17:15:12 crc kubenswrapper[4693]: I1212 17:15:12.946939 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-sbjt9" podUID="cabce219-6d9f-4aac-9402-ecf80e930f68" containerName="registry-server" probeResult="failure" output=< Dec 12 17:15:12 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:15:12 crc kubenswrapper[4693]: > Dec 12 17:15:13 crc kubenswrapper[4693]: I1212 17:15:13.522383 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 17:15:13 crc kubenswrapper[4693]: I1212 17:15:13.522800 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 17:15:13 crc kubenswrapper[4693]: I1212 17:15:13.529136 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zwt44" Dec 12 17:15:14 crc kubenswrapper[4693]: I1212 17:15:14.138551 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w6x8t" Dec 12 17:15:14 crc kubenswrapper[4693]: I1212 17:15:14.474667 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 17:15:14 crc kubenswrapper[4693]: I1212 17:15:14.639819 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bhqb5" podUID="60ada46e-eb41-4339-a653-610721982c81" containerName="registry-server" probeResult="failure" output=< Dec 12 17:15:14 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:15:14 crc kubenswrapper[4693]: > Dec 12 17:15:15 crc kubenswrapper[4693]: I1212 17:15:15.948114 4693 generic.go:334] "Generic (PLEG): container finished" podID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerID="72b0b942a93ffffdbb799c3a665c85b2551dc03de28c51ff400e54dae2f01eaa" exitCode=0 Dec 12 17:15:15 crc kubenswrapper[4693]: I1212 17:15:15.948576 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f98101ce-5311-42f6-951c-e0b8dd94641b","Type":"ContainerDied","Data":"72b0b942a93ffffdbb799c3a665c85b2551dc03de28c51ff400e54dae2f01eaa"} Dec 12 17:15:15 crc kubenswrapper[4693]: I1212 17:15:15.951668 4693 generic.go:334] "Generic (PLEG): container finished" podID="fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b" containerID="4fe49fad2e4e5ff4139c2cad3eeff20628a090db56175e1013c05f052514b9db" exitCode=1 Dec 12 17:15:15 crc kubenswrapper[4693]: I1212 17:15:15.951691 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b","Type":"ContainerDied","Data":"4fe49fad2e4e5ff4139c2cad3eeff20628a090db56175e1013c05f052514b9db"} Dec 12 17:15:16 crc kubenswrapper[4693]: I1212 17:15:16.842727 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-d677b6fd-mw5qj" Dec 12 17:15:16 crc kubenswrapper[4693]: I1212 
17:15:16.993199 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f98101ce-5311-42f6-951c-e0b8dd94641b","Type":"ContainerStarted","Data":"0fabf0b5c820ad3c3f2a7c2d042f0f5ad5cbaa648ca9e574288d5f570869d9d0"} Dec 12 17:15:17 crc kubenswrapper[4693]: I1212 17:15:17.344484 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hhs2z" Dec 12 17:15:17 crc kubenswrapper[4693]: I1212 17:15:17.356050 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-wcl2w" Dec 12 17:15:17 crc kubenswrapper[4693]: I1212 17:15:17.542358 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-f2llf" Dec 12 17:15:17 crc kubenswrapper[4693]: I1212 17:15:17.615013 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-f2llf" Dec 12 17:15:17 crc kubenswrapper[4693]: I1212 17:15:17.619365 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 12 17:15:17 crc kubenswrapper[4693]: I1212 17:15:17.709372 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d86dw" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.113263 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-7qqwq" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.120378 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.127648 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-mqxdn" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.182424 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-config-data\") pod \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.183048 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.183173 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-ca-certs\") pod \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.183213 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-openstack-config-secret\") pod \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.183429 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-ssh-key\") pod \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.183463 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-test-operator-ephemeral-workdir\") pod \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.183507 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-test-operator-ephemeral-temporary\") pod \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.183583 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzgbd\" (UniqueName: \"kubernetes.io/projected/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-kube-api-access-dzgbd\") pod \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.183611 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-openstack-config\") pod \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\" (UID: \"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b\") " Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.185171 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b" (UID: "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.184839 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-config-data" (OuterVolumeSpecName: "config-data") pod "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b" (UID: "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.187827 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b" (UID: "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.193489 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-kube-api-access-dzgbd" (OuterVolumeSpecName: "kube-api-access-dzgbd") pod "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b" (UID: "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b"). InnerVolumeSpecName "kube-api-access-dzgbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.198385 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b" (UID: "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.224015 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b" (UID: "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.224632 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b" (UID: "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.257328 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b" (UID: "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.266187 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b" (UID: "fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.286818 4693 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.286854 4693 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.286864 4693 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.286875 4693 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.286886 4693 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.286898 4693 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.286907 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzgbd\" (UniqueName: \"kubernetes.io/projected/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-kube-api-access-dzgbd\") on node \"crc\" DevicePath \"\"" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.286917 4693 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.286925 4693 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.318723 4693 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.391304 4693 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.833696 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 12 17:15:18 crc kubenswrapper[4693]: I1212 17:15:18.833748 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 12 17:15:19 crc kubenswrapper[4693]: I1212 17:15:19.014877 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b","Type":"ContainerDied","Data":"5ea910e026b7d47ea38f0a9cd65fb4bf21b77b368b7cb054a12214ab21768a3e"} Dec 12 17:15:19 crc kubenswrapper[4693]: I1212 17:15:19.015147 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ea910e026b7d47ea38f0a9cd65fb4bf21b77b368b7cb054a12214ab21768a3e" Dec 12 17:15:19 crc kubenswrapper[4693]: I1212 17:15:19.014922 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 12 17:15:19 crc kubenswrapper[4693]: I1212 17:15:19.167204 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-ndkbb" Dec 12 17:15:20 crc kubenswrapper[4693]: I1212 17:15:20.363537 4693 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 17:15:20 crc kubenswrapper[4693]: I1212 17:15:20.365538 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 17:15:20 crc kubenswrapper[4693]: I1212 17:15:20.429325 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 12 17:15:20 crc kubenswrapper[4693]: I1212 17:15:20.429666 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 12 17:15:20 crc kubenswrapper[4693]: I1212 17:15:20.516756 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 12 17:15:21 crc kubenswrapper[4693]: I1212 17:15:21.168654 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 17:15:21 crc kubenswrapper[4693]: I1212 17:15:21.221675 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q5jh6" Dec 12 17:15:21 crc kubenswrapper[4693]: I1212 17:15:21.918248 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 17:15:21 crc kubenswrapper[4693]: I1212 17:15:21.966359 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sbjt9" Dec 12 17:15:22 crc kubenswrapper[4693]: I1212 17:15:22.357503 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:15:22 crc kubenswrapper[4693]: E1212 17:15:22.357787 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:15:24 crc kubenswrapper[4693]: I1212 17:15:24.578441 4693 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-bhqb5" podUID="60ada46e-eb41-4339-a653-610721982c81" containerName="registry-server" probeResult="failure" output=< Dec 12 17:15:24 crc kubenswrapper[4693]: timeout: failed to connect service ":50051" within 1s Dec 12 17:15:24 crc kubenswrapper[4693]: > Dec 12 17:15:25 crc kubenswrapper[4693]: I1212 17:15:25.537629 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.432007 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 12 17:15:26 crc kubenswrapper[4693]: E1212 17:15:26.433134 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b" containerName="tempest-tests-tempest-tests-runner" Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.433167 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b" containerName="tempest-tests-tempest-tests-runner" Dec 12 17:15:26 crc kubenswrapper[4693]: E1212 17:15:26.433209 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fa56db-49e1-4138-b485-aaaa14e7ebdd" containerName="extract-content" Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.433218 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fa56db-49e1-4138-b485-aaaa14e7ebdd" containerName="extract-content" Dec 12 17:15:26 crc kubenswrapper[4693]: E1212 17:15:26.433254 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fa56db-49e1-4138-b485-aaaa14e7ebdd" containerName="registry-server" Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.433263 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fa56db-49e1-4138-b485-aaaa14e7ebdd" containerName="registry-server" Dec 12 17:15:26 crc kubenswrapper[4693]: E1212 17:15:26.433411 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fa56db-49e1-4138-b485-aaaa14e7ebdd" containerName="extract-utilities" Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.433428 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fa56db-49e1-4138-b485-aaaa14e7ebdd" containerName="extract-utilities" Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.433734 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa2d51d9-8a13-4b35-846e-3f2e1fa7c64b" containerName="tempest-tests-tempest-tests-runner" Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.433783 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0fa56db-49e1-4138-b485-aaaa14e7ebdd" containerName="registry-server" Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.435532 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.442947 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zbjwh" Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.461286 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.507863 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bc4aacbd-8963-43c8-a7c8-c26f92adf462\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.508360 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbnlr\" (UniqueName: \"kubernetes.io/projected/bc4aacbd-8963-43c8-a7c8-c26f92adf462-kube-api-access-xbnlr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bc4aacbd-8963-43c8-a7c8-c26f92adf462\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.611185 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbnlr\" (UniqueName: \"kubernetes.io/projected/bc4aacbd-8963-43c8-a7c8-c26f92adf462-kube-api-access-xbnlr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bc4aacbd-8963-43c8-a7c8-c26f92adf462\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.611599 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bc4aacbd-8963-43c8-a7c8-c26f92adf462\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.612137 4693 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bc4aacbd-8963-43c8-a7c8-c26f92adf462\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.636443 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbnlr\" (UniqueName: \"kubernetes.io/projected/bc4aacbd-8963-43c8-a7c8-c26f92adf462-kube-api-access-xbnlr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bc4aacbd-8963-43c8-a7c8-c26f92adf462\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 17:15:26 crc kubenswrapper[4693]: I1212 17:15:26.656368 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bc4aacbd-8963-43c8-a7c8-c26f92adf462\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 17:15:26 crc 
kubenswrapper[4693]: I1212 17:15:26.821830 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 17:15:27 crc kubenswrapper[4693]: I1212 17:15:27.406020 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 12 17:15:28 crc kubenswrapper[4693]: I1212 17:15:28.127921 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"bc4aacbd-8963-43c8-a7c8-c26f92adf462","Type":"ContainerStarted","Data":"e2eb858e756f80d49c110ab3e37bea4b40ebfc550ce29f6bd8baff44dd75de4b"} Dec 12 17:15:28 crc kubenswrapper[4693]: I1212 17:15:28.512130 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" podUID="70395fde-23f6-41b0-a04e-c4568b405e9d" containerName="oauth-openshift" containerID="cri-o://1b9d77c1cbf5e73bd221f286b168fe2c4a8d6d3a0409c7e41383af29dd1358b6" gracePeriod=15 Dec 12 17:15:29 crc kubenswrapper[4693]: I1212 17:15:29.141115 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"bc4aacbd-8963-43c8-a7c8-c26f92adf462","Type":"ContainerStarted","Data":"bb410d6e2bfbc6aefe6d11425282c84559b7bfb2aa48b7bb4ea43fd505201496"} Dec 12 17:15:29 crc kubenswrapper[4693]: I1212 17:15:29.143505 4693 generic.go:334] "Generic (PLEG): container finished" podID="70395fde-23f6-41b0-a04e-c4568b405e9d" containerID="1b9d77c1cbf5e73bd221f286b168fe2c4a8d6d3a0409c7e41383af29dd1358b6" exitCode=0 Dec 12 17:15:29 crc kubenswrapper[4693]: I1212 17:15:29.143562 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" event={"ID":"70395fde-23f6-41b0-a04e-c4568b405e9d","Type":"ContainerDied","Data":"1b9d77c1cbf5e73bd221f286b168fe2c4a8d6d3a0409c7e41383af29dd1358b6"} Dec 12 17:15:29 crc kubenswrapper[4693]: I1212 17:15:29.143590 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" event={"ID":"70395fde-23f6-41b0-a04e-c4568b405e9d","Type":"ContainerStarted","Data":"20e10ad1f9b2f859354e799b9deb419b878989e8be9ac25d41bee6d304b22e4f"} Dec 12 17:15:29 crc kubenswrapper[4693]: I1212 17:15:29.144666 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 17:15:29 crc kubenswrapper[4693]: I1212 17:15:29.187445 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.05523652 podStartE2EDuration="3.187307539s" podCreationTimestamp="2025-12-12 17:15:26 +0000 UTC" firstStartedPulling="2025-12-12 17:15:27.470104517 +0000 UTC m=+5354.638744118" lastFinishedPulling="2025-12-12 17:15:28.602175536 +0000 UTC m=+5355.770815137" observedRunningTime="2025-12-12 17:15:29.155146212 +0000 UTC m=+5356.323785813" watchObservedRunningTime="2025-12-12 17:15:29.187307539 +0000 UTC m=+5356.355947140" Dec 12 17:15:29 crc kubenswrapper[4693]: I1212 17:15:29.580154 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-db548d47c-z22tr" Dec 12 17:15:29 crc kubenswrapper[4693]: I1212 17:15:29.945385 4693 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-console/console-65dd6d4bcb-h8fs2" podUID="ac10c353-ed34-4f82-ad22-dc0065fbb96e" containerName="console" containerID="cri-o://df2c0ba9a10de9c6284da4af4cfdf8984f720e1a8f7248d0acad338c9985812e" gracePeriod=15 Dec 12 17:15:30 crc kubenswrapper[4693]: I1212 17:15:30.194170 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-65dd6d4bcb-h8fs2_ac10c353-ed34-4f82-ad22-dc0065fbb96e/console/0.log" Dec 12 17:15:30 crc kubenswrapper[4693]: I1212 17:15:30.194241 4693 generic.go:334] "Generic (PLEG): container finished" podID="ac10c353-ed34-4f82-ad22-dc0065fbb96e" containerID="df2c0ba9a10de9c6284da4af4cfdf8984f720e1a8f7248d0acad338c9985812e" exitCode=2 Dec 12 17:15:30 crc kubenswrapper[4693]: I1212 17:15:30.195117 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dd6d4bcb-h8fs2" event={"ID":"ac10c353-ed34-4f82-ad22-dc0065fbb96e","Type":"ContainerDied","Data":"df2c0ba9a10de9c6284da4af4cfdf8984f720e1a8f7248d0acad338c9985812e"} Dec 12 17:15:30 crc kubenswrapper[4693]: I1212 17:15:30.363061 4693 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 17:15:30 crc kubenswrapper[4693]: I1212 17:15:30.363134 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 17:15:30 crc kubenswrapper[4693]: I1212 17:15:30.363204 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 17:15:30 crc kubenswrapper[4693]: I1212 17:15:30.364172 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"bd4c6981e10c7955a0d31336fbcb78800f626a4bffb8b076bac18dd4cc3398c3"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 12 17:15:30 crc kubenswrapper[4693]: I1212 17:15:30.364334 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://bd4c6981e10c7955a0d31336fbcb78800f626a4bffb8b076bac18dd4cc3398c3" gracePeriod=30 Dec 12 17:15:30 crc kubenswrapper[4693]: I1212 17:15:30.531570 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:15:31 crc kubenswrapper[4693]: I1212 17:15:31.206838 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-65dd6d4bcb-h8fs2_ac10c353-ed34-4f82-ad22-dc0065fbb96e/console/0.log" Dec 12 17:15:31 crc kubenswrapper[4693]: I1212 17:15:31.207486 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dd6d4bcb-h8fs2" 
event={"ID":"ac10c353-ed34-4f82-ad22-dc0065fbb96e","Type":"ContainerStarted","Data":"36e3a62d0e20e24fcbe744ecb288c9d70437f6622e37194420f83d17c91ba9c5"} Dec 12 17:15:33 crc kubenswrapper[4693]: I1212 17:15:33.628427 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 17:15:33 crc kubenswrapper[4693]: I1212 17:15:33.713677 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bhqb5" Dec 12 17:15:34 crc kubenswrapper[4693]: I1212 17:15:34.357913 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:15:34 crc kubenswrapper[4693]: E1212 17:15:34.358649 4693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wvw2c_openshift-machine-config-operator(71d6bb6b-1211-4bbd-8946-2010438d6a5d)\"" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" Dec 12 17:15:34 crc kubenswrapper[4693]: I1212 17:15:34.388640 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 17:15:34 crc kubenswrapper[4693]: I1212 17:15:34.389593 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 17:15:34 crc kubenswrapper[4693]: I1212 17:15:34.393348 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 17:15:35 crc kubenswrapper[4693]: I1212 17:15:35.273824 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-65dd6d4bcb-h8fs2" Dec 12 17:15:35 crc kubenswrapper[4693]: I1212 17:15:35.538594 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:15:36 crc kubenswrapper[4693]: I1212 17:15:36.330919 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7744f9564f-bwttl" Dec 12 17:15:40 crc kubenswrapper[4693]: I1212 17:15:40.547239 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:15:45 crc kubenswrapper[4693]: I1212 17:15:45.358202 4693 scope.go:117] "RemoveContainer" containerID="5c191b93347abb88af60143ec847b1021dad26fa30ac0a85fcbf4140c2e838cb" Dec 12 17:15:45 crc kubenswrapper[4693]: I1212 17:15:45.533807 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:15:46 crc kubenswrapper[4693]: I1212 17:15:46.442259 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" 
event={"ID":"71d6bb6b-1211-4bbd-8946-2010438d6a5d","Type":"ContainerStarted","Data":"bf3cb31f108c72e1d370f3f6149548541e9b404d71d3e28bbea30ac9faf042ec"} Dec 12 17:15:50 crc kubenswrapper[4693]: I1212 17:15:50.539829 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:15:50 crc kubenswrapper[4693]: I1212 17:15:50.819947 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 17:15:55 crc kubenswrapper[4693]: I1212 17:15:55.556466 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:16:00 crc kubenswrapper[4693]: I1212 17:16:00.541067 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:16:00 crc kubenswrapper[4693]: I1212 17:16:00.646353 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Dec 12 17:16:00 crc kubenswrapper[4693]: I1212 17:16:00.647760 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 12 17:16:00 crc kubenswrapper[4693]: I1212 17:16:00.649314 4693 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="bd4c6981e10c7955a0d31336fbcb78800f626a4bffb8b076bac18dd4cc3398c3" exitCode=137 Dec 12 17:16:00 crc kubenswrapper[4693]: I1212 17:16:00.649393 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"bd4c6981e10c7955a0d31336fbcb78800f626a4bffb8b076bac18dd4cc3398c3"} Dec 12 17:16:00 crc kubenswrapper[4693]: I1212 17:16:00.649451 4693 scope.go:117] "RemoveContainer" containerID="1adb73c07e53bc378d1cea7ef797f24c5f8be2a84d6833262c2329d35ba64820" Dec 12 17:16:01 crc kubenswrapper[4693]: I1212 17:16:01.668658 4693 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Dec 12 17:16:01 crc kubenswrapper[4693]: I1212 17:16:01.672813 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fa328c4eab355108cd1522e9a5c9b83b523c34cbbb96b6518dd3ede1aa7a0e7c"} Dec 12 17:16:04 crc kubenswrapper[4693]: I1212 17:16:04.474675 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 17:16:05 crc kubenswrapper[4693]: I1212 17:16:05.549603 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:16:10 crc kubenswrapper[4693]: I1212 17:16:10.363621 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 17:16:10 crc kubenswrapper[4693]: I1212 17:16:10.370589 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 17:16:10 crc kubenswrapper[4693]: I1212 17:16:10.536306 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:16:10 crc kubenswrapper[4693]: I1212 17:16:10.812952 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 17:16:15 crc kubenswrapper[4693]: I1212 17:16:15.535841 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:16:20 crc kubenswrapper[4693]: I1212 17:16:20.545694 4693 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 17:16:20 crc kubenswrapper[4693]: I1212 17:16:20.546541 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 12 17:16:20 crc kubenswrapper[4693]: I1212 17:16:20.548686 4693 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"0fabf0b5c820ad3c3f2a7c2d042f0f5ad5cbaa648ca9e574288d5f570869d9d0"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed startup probe, will be restarted" Dec 12 17:16:20 crc kubenswrapper[4693]: I1212 17:16:20.548881 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerName="cinder-scheduler" containerID="cri-o://0fabf0b5c820ad3c3f2a7c2d042f0f5ad5cbaa648ca9e574288d5f570869d9d0" gracePeriod=30 Dec 12 17:16:22 crc kubenswrapper[4693]: I1212 17:16:22.153482 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw"] Dec 12 17:16:22 crc kubenswrapper[4693]: I1212 17:16:22.158635 4693 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw" Dec 12 17:16:22 crc kubenswrapper[4693]: I1212 17:16:22.171721 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw"] Dec 12 17:16:22 crc kubenswrapper[4693]: I1212 17:16:22.175452 4693 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 12 17:16:22 crc kubenswrapper[4693]: I1212 17:16:22.180353 4693 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 12 17:16:22 crc kubenswrapper[4693]: I1212 17:16:22.228007 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bca3698-9e73-4f9a-be62-25dbdcedd60b-secret-volume\") pod \"collect-profiles-29425995-4shpw\" (UID: \"6bca3698-9e73-4f9a-be62-25dbdcedd60b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw" Dec 12 17:16:22 crc kubenswrapper[4693]: I1212 17:16:22.228077 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bca3698-9e73-4f9a-be62-25dbdcedd60b-config-volume\") pod \"collect-profiles-29425995-4shpw\" (UID: \"6bca3698-9e73-4f9a-be62-25dbdcedd60b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw" Dec 12 17:16:22 crc kubenswrapper[4693]: I1212 17:16:22.228116 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7vwj\" (UniqueName: \"kubernetes.io/projected/6bca3698-9e73-4f9a-be62-25dbdcedd60b-kube-api-access-m7vwj\") pod \"collect-profiles-29425995-4shpw\" (UID: \"6bca3698-9e73-4f9a-be62-25dbdcedd60b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw" Dec 12 17:16:22 crc kubenswrapper[4693]: I1212 17:16:22.331846 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bca3698-9e73-4f9a-be62-25dbdcedd60b-secret-volume\") pod \"collect-profiles-29425995-4shpw\" (UID: \"6bca3698-9e73-4f9a-be62-25dbdcedd60b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw" Dec 12 17:16:22 crc kubenswrapper[4693]: I1212 17:16:22.332178 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bca3698-9e73-4f9a-be62-25dbdcedd60b-config-volume\") pod \"collect-profiles-29425995-4shpw\" (UID: \"6bca3698-9e73-4f9a-be62-25dbdcedd60b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw" Dec 12 17:16:22 crc kubenswrapper[4693]: I1212 17:16:22.332332 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7vwj\" (UniqueName: \"kubernetes.io/projected/6bca3698-9e73-4f9a-be62-25dbdcedd60b-kube-api-access-m7vwj\") pod \"collect-profiles-29425995-4shpw\" (UID: \"6bca3698-9e73-4f9a-be62-25dbdcedd60b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw" Dec 12 17:16:22 crc kubenswrapper[4693]: I1212 17:16:22.348605 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bca3698-9e73-4f9a-be62-25dbdcedd60b-config-volume\") pod 
\"collect-profiles-29425995-4shpw\" (UID: \"6bca3698-9e73-4f9a-be62-25dbdcedd60b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw" Dec 12 17:16:22 crc kubenswrapper[4693]: I1212 17:16:22.380145 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bca3698-9e73-4f9a-be62-25dbdcedd60b-secret-volume\") pod \"collect-profiles-29425995-4shpw\" (UID: \"6bca3698-9e73-4f9a-be62-25dbdcedd60b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw" Dec 12 17:16:22 crc kubenswrapper[4693]: I1212 17:16:22.401189 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7vwj\" (UniqueName: \"kubernetes.io/projected/6bca3698-9e73-4f9a-be62-25dbdcedd60b-kube-api-access-m7vwj\") pod \"collect-profiles-29425995-4shpw\" (UID: \"6bca3698-9e73-4f9a-be62-25dbdcedd60b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw" Dec 12 17:16:22 crc kubenswrapper[4693]: I1212 17:16:22.526480 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw" Dec 12 17:16:23 crc kubenswrapper[4693]: E1212 17:16:23.285006 4693 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.204:38568->38.102.83.204:43805: write tcp 38.102.83.204:38568->38.102.83.204:43805: write: broken pipe Dec 12 17:16:23 crc kubenswrapper[4693]: I1212 17:16:23.348061 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw"] Dec 12 17:16:23 crc kubenswrapper[4693]: I1212 17:16:23.988175 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw" event={"ID":"6bca3698-9e73-4f9a-be62-25dbdcedd60b","Type":"ContainerStarted","Data":"e6f1e337254ad248ddec04b37e6b79a874f972ed9c17ad2627fe9dba191dfe22"} Dec 12 17:16:23 crc kubenswrapper[4693]: I1212 17:16:23.988451 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw" event={"ID":"6bca3698-9e73-4f9a-be62-25dbdcedd60b","Type":"ContainerStarted","Data":"b803e0c6dc48aefa2fddb7a4af8f84e0e977093dc742d6948236efc947ad6cd5"} Dec 12 17:16:24 crc kubenswrapper[4693]: I1212 17:16:24.003378 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw" podStartSLOduration=2.003359772 podStartE2EDuration="2.003359772s" podCreationTimestamp="2025-12-12 17:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:16:24.002621042 +0000 UTC m=+5411.171260643" watchObservedRunningTime="2025-12-12 17:16:24.003359772 +0000 UTC m=+5411.171999373" Dec 12 17:16:24 crc kubenswrapper[4693]: I1212 17:16:24.999396 4693 generic.go:334] "Generic (PLEG): container finished" podID="6bca3698-9e73-4f9a-be62-25dbdcedd60b" containerID="e6f1e337254ad248ddec04b37e6b79a874f972ed9c17ad2627fe9dba191dfe22" exitCode=0 Dec 12 17:16:24 crc kubenswrapper[4693]: I1212 17:16:24.999496 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw" event={"ID":"6bca3698-9e73-4f9a-be62-25dbdcedd60b","Type":"ContainerDied","Data":"e6f1e337254ad248ddec04b37e6b79a874f972ed9c17ad2627fe9dba191dfe22"} 
Dec 12 17:16:25 crc kubenswrapper[4693]: I1212 17:16:25.590583 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Dec 12 17:16:25 crc kubenswrapper[4693]: I1212 17:16:25.635121 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Dec 12 17:16:25 crc kubenswrapper[4693]: I1212 17:16:25.709431 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Dec 12 17:16:25 crc kubenswrapper[4693]: I1212 17:16:25.742110 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Dec 12 17:16:26 crc kubenswrapper[4693]: I1212 17:16:26.450755 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw"
Dec 12 17:16:26 crc kubenswrapper[4693]: I1212 17:16:26.596452 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bca3698-9e73-4f9a-be62-25dbdcedd60b-secret-volume\") pod \"6bca3698-9e73-4f9a-be62-25dbdcedd60b\" (UID: \"6bca3698-9e73-4f9a-be62-25dbdcedd60b\") "
Dec 12 17:16:26 crc kubenswrapper[4693]: I1212 17:16:26.596795 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bca3698-9e73-4f9a-be62-25dbdcedd60b-config-volume\") pod \"6bca3698-9e73-4f9a-be62-25dbdcedd60b\" (UID: \"6bca3698-9e73-4f9a-be62-25dbdcedd60b\") "
Dec 12 17:16:26 crc kubenswrapper[4693]: I1212 17:16:26.596984 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7vwj\" (UniqueName: \"kubernetes.io/projected/6bca3698-9e73-4f9a-be62-25dbdcedd60b-kube-api-access-m7vwj\") pod \"6bca3698-9e73-4f9a-be62-25dbdcedd60b\" (UID: \"6bca3698-9e73-4f9a-be62-25dbdcedd60b\") "
Dec 12 17:16:26 crc kubenswrapper[4693]: I1212 17:16:26.597619 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bca3698-9e73-4f9a-be62-25dbdcedd60b-config-volume" (OuterVolumeSpecName: "config-volume") pod "6bca3698-9e73-4f9a-be62-25dbdcedd60b" (UID: "6bca3698-9e73-4f9a-be62-25dbdcedd60b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 12 17:16:26 crc kubenswrapper[4693]: I1212 17:16:26.604801 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bca3698-9e73-4f9a-be62-25dbdcedd60b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6bca3698-9e73-4f9a-be62-25dbdcedd60b" (UID: "6bca3698-9e73-4f9a-be62-25dbdcedd60b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 12 17:16:26 crc kubenswrapper[4693]: I1212 17:16:26.604871 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bca3698-9e73-4f9a-be62-25dbdcedd60b-kube-api-access-m7vwj" (OuterVolumeSpecName: "kube-api-access-m7vwj") pod "6bca3698-9e73-4f9a-be62-25dbdcedd60b" (UID: "6bca3698-9e73-4f9a-be62-25dbdcedd60b"). InnerVolumeSpecName "kube-api-access-m7vwj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 12 17:16:26 crc kubenswrapper[4693]: I1212 17:16:26.699861 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7vwj\" (UniqueName: \"kubernetes.io/projected/6bca3698-9e73-4f9a-be62-25dbdcedd60b-kube-api-access-m7vwj\") on node \"crc\" DevicePath \"\""
Dec 12 17:16:26 crc kubenswrapper[4693]: I1212 17:16:26.699894 4693 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bca3698-9e73-4f9a-be62-25dbdcedd60b-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 12 17:16:26 crc kubenswrapper[4693]: I1212 17:16:26.699906 4693 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bca3698-9e73-4f9a-be62-25dbdcedd60b-config-volume\") on node \"crc\" DevicePath \"\""
Dec 12 17:16:27 crc kubenswrapper[4693]: I1212 17:16:27.024099 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw" event={"ID":"6bca3698-9e73-4f9a-be62-25dbdcedd60b","Type":"ContainerDied","Data":"b803e0c6dc48aefa2fddb7a4af8f84e0e977093dc742d6948236efc947ad6cd5"}
Dec 12 17:16:27 crc kubenswrapper[4693]: I1212 17:16:27.024190 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425995-4shpw"
Dec 12 17:16:27 crc kubenswrapper[4693]: I1212 17:16:27.024380 4693 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b803e0c6dc48aefa2fddb7a4af8f84e0e977093dc742d6948236efc947ad6cd5"
Dec 12 17:16:27 crc kubenswrapper[4693]: I1212 17:16:27.981911 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t"]
Dec 12 17:16:27 crc kubenswrapper[4693]: I1212 17:16:27.991337 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425950-gk77t"]
Dec 12 17:16:29 crc kubenswrapper[4693]: I1212 17:16:29.374429 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b042caa-be5f-4c07-9a47-2e434e76d777" path="/var/lib/kubelet/pods/4b042caa-be5f-4c07-9a47-2e434e76d777/volumes"
Dec 12 17:16:51 crc kubenswrapper[4693]: I1212 17:16:51.372359 4693 generic.go:334] "Generic (PLEG): container finished" podID="f98101ce-5311-42f6-951c-e0b8dd94641b" containerID="0fabf0b5c820ad3c3f2a7c2d042f0f5ad5cbaa648ca9e574288d5f570869d9d0" exitCode=137
Dec 12 17:16:51 crc kubenswrapper[4693]: I1212 17:16:51.372491 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f98101ce-5311-42f6-951c-e0b8dd94641b","Type":"ContainerDied","Data":"0fabf0b5c820ad3c3f2a7c2d042f0f5ad5cbaa648ca9e574288d5f570869d9d0"}
Dec 12 17:16:51 crc kubenswrapper[4693]: I1212 17:16:51.372838 4693 scope.go:117] "RemoveContainer" containerID="72b0b942a93ffffdbb799c3a665c85b2551dc03de28c51ff400e54dae2f01eaa"
Dec 12 17:16:52 crc kubenswrapper[4693]: I1212 17:16:52.389504 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f98101ce-5311-42f6-951c-e0b8dd94641b","Type":"ContainerStarted","Data":"40a905ee52a05c91e5d765a6a26ffffcff1f895782d1c307a7d3c3db4c7be4d8"}
Dec 12 17:16:55 crc kubenswrapper[4693]: I1212 17:16:55.516940 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 12 17:17:00 crc kubenswrapper[4693]: I1212 17:17:00.573883 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Dec 12 17:17:18 crc kubenswrapper[4693]: I1212 17:17:18.822260 4693 generic.go:334] "Generic (PLEG): container finished" podID="cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0" containerID="5915a36368dfe5a7b12fb280ceb2ddc12c9a9dadd30ab5b7207f97204a71c11e" exitCode=0
Dec 12 17:17:18 crc kubenswrapper[4693]: I1212 17:17:18.822348 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" event={"ID":"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0","Type":"ContainerDied","Data":"5915a36368dfe5a7b12fb280ceb2ddc12c9a9dadd30ab5b7207f97204a71c11e"}
Dec 12 17:17:19 crc kubenswrapper[4693]: I1212 17:17:19.841245 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" event={"ID":"cbb8e58d-1fc6-4b66-82b3-ad43d71c4ce0","Type":"ContainerStarted","Data":"a83f14d04642aedf1a5ef882799d10f093f504f9165eac2b3aa0e13038ec4a23"}
Dec 12 17:17:23 crc kubenswrapper[4693]: I1212 17:17:23.450484 4693 scope.go:117] "RemoveContainer" containerID="5a7c42971a2e9583e866f36edc8b2c2b7f81f8a20d2c9c90160b8e937b5826a8"
Dec 12 17:17:25 crc kubenswrapper[4693]: I1212 17:17:25.487611 4693 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cnkp7"]
Dec 12 17:17:25 crc kubenswrapper[4693]: E1212 17:17:25.489933 4693 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bca3698-9e73-4f9a-be62-25dbdcedd60b" containerName="collect-profiles"
Dec 12 17:17:25 crc kubenswrapper[4693]: I1212 17:17:25.489964 4693 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bca3698-9e73-4f9a-be62-25dbdcedd60b" containerName="collect-profiles"
Dec 12 17:17:25 crc kubenswrapper[4693]: I1212 17:17:25.490394 4693 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bca3698-9e73-4f9a-be62-25dbdcedd60b" containerName="collect-profiles"
Dec 12 17:17:25 crc kubenswrapper[4693]: I1212 17:17:25.492669 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cnkp7"
Dec 12 17:17:25 crc kubenswrapper[4693]: I1212 17:17:25.498830 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cnkp7"]
Dec 12 17:17:25 crc kubenswrapper[4693]: I1212 17:17:25.559209 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e04abc3-fd61-403b-ac07-aadbae0119b7-catalog-content\") pod \"certified-operators-cnkp7\" (UID: \"1e04abc3-fd61-403b-ac07-aadbae0119b7\") " pod="openshift-marketplace/certified-operators-cnkp7"
Dec 12 17:17:25 crc kubenswrapper[4693]: I1212 17:17:25.559760 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvxst\" (UniqueName: \"kubernetes.io/projected/1e04abc3-fd61-403b-ac07-aadbae0119b7-kube-api-access-mvxst\") pod \"certified-operators-cnkp7\" (UID: \"1e04abc3-fd61-403b-ac07-aadbae0119b7\") " pod="openshift-marketplace/certified-operators-cnkp7"
Dec 12 17:17:25 crc kubenswrapper[4693]: I1212 17:17:25.559857 4693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e04abc3-fd61-403b-ac07-aadbae0119b7-utilities\") pod \"certified-operators-cnkp7\" (UID: \"1e04abc3-fd61-403b-ac07-aadbae0119b7\") " pod="openshift-marketplace/certified-operators-cnkp7"
Dec 12 17:17:25 crc kubenswrapper[4693]: I1212 17:17:25.661928 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvxst\" (UniqueName: \"kubernetes.io/projected/1e04abc3-fd61-403b-ac07-aadbae0119b7-kube-api-access-mvxst\") pod \"certified-operators-cnkp7\" (UID: \"1e04abc3-fd61-403b-ac07-aadbae0119b7\") " pod="openshift-marketplace/certified-operators-cnkp7"
Dec 12 17:17:25 crc kubenswrapper[4693]: I1212 17:17:25.661983 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e04abc3-fd61-403b-ac07-aadbae0119b7-utilities\") pod \"certified-operators-cnkp7\" (UID: \"1e04abc3-fd61-403b-ac07-aadbae0119b7\") " pod="openshift-marketplace/certified-operators-cnkp7"
Dec 12 17:17:25 crc kubenswrapper[4693]: I1212 17:17:25.662080 4693 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e04abc3-fd61-403b-ac07-aadbae0119b7-catalog-content\") pod \"certified-operators-cnkp7\" (UID: \"1e04abc3-fd61-403b-ac07-aadbae0119b7\") " pod="openshift-marketplace/certified-operators-cnkp7"
Dec 12 17:17:25 crc kubenswrapper[4693]: I1212 17:17:25.662679 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e04abc3-fd61-403b-ac07-aadbae0119b7-catalog-content\") pod \"certified-operators-cnkp7\" (UID: \"1e04abc3-fd61-403b-ac07-aadbae0119b7\") " pod="openshift-marketplace/certified-operators-cnkp7"
Dec 12 17:17:25 crc kubenswrapper[4693]: I1212 17:17:25.662773 4693 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e04abc3-fd61-403b-ac07-aadbae0119b7-utilities\") pod \"certified-operators-cnkp7\" (UID: \"1e04abc3-fd61-403b-ac07-aadbae0119b7\") " pod="openshift-marketplace/certified-operators-cnkp7"
Dec 12 17:17:25 crc kubenswrapper[4693]: I1212 17:17:25.694833 4693 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-mvxst\" (UniqueName: \"kubernetes.io/projected/1e04abc3-fd61-403b-ac07-aadbae0119b7-kube-api-access-mvxst\") pod \"certified-operators-cnkp7\" (UID: \"1e04abc3-fd61-403b-ac07-aadbae0119b7\") " pod="openshift-marketplace/certified-operators-cnkp7" Dec 12 17:17:25 crc kubenswrapper[4693]: I1212 17:17:25.821581 4693 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cnkp7" Dec 12 17:17:26 crc kubenswrapper[4693]: I1212 17:17:26.374163 4693 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cnkp7"] Dec 12 17:17:26 crc kubenswrapper[4693]: I1212 17:17:26.948721 4693 generic.go:334] "Generic (PLEG): container finished" podID="1e04abc3-fd61-403b-ac07-aadbae0119b7" containerID="4b7166f56df118ea857ff5d9b2433177b38f3b8ad12c8e8d59044cb3380e6e57" exitCode=0 Dec 12 17:17:26 crc kubenswrapper[4693]: I1212 17:17:26.949141 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnkp7" event={"ID":"1e04abc3-fd61-403b-ac07-aadbae0119b7","Type":"ContainerDied","Data":"4b7166f56df118ea857ff5d9b2433177b38f3b8ad12c8e8d59044cb3380e6e57"} Dec 12 17:17:26 crc kubenswrapper[4693]: I1212 17:17:26.949188 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnkp7" event={"ID":"1e04abc3-fd61-403b-ac07-aadbae0119b7","Type":"ContainerStarted","Data":"77cba8abd48cd55742e54c59c52d401e8efc5d9cbc7314d7745b779b5676c2be"} Dec 12 17:17:26 crc kubenswrapper[4693]: I1212 17:17:26.955695 4693 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 17:17:27 crc kubenswrapper[4693]: I1212 17:17:27.378078 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 17:17:27 crc kubenswrapper[4693]: I1212 17:17:27.378179 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 17:17:27 crc kubenswrapper[4693]: I1212 17:17:27.970961 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnkp7" event={"ID":"1e04abc3-fd61-403b-ac07-aadbae0119b7","Type":"ContainerStarted","Data":"54c4458d06cefbf061bf6bc7398f871bf9ee67cc17b46788bb86e7c2b2f5331b"} Dec 12 17:17:29 crc kubenswrapper[4693]: I1212 17:17:29.998533 4693 generic.go:334] "Generic (PLEG): container finished" podID="1e04abc3-fd61-403b-ac07-aadbae0119b7" containerID="54c4458d06cefbf061bf6bc7398f871bf9ee67cc17b46788bb86e7c2b2f5331b" exitCode=0 Dec 12 17:17:29 crc kubenswrapper[4693]: I1212 17:17:29.998607 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnkp7" event={"ID":"1e04abc3-fd61-403b-ac07-aadbae0119b7","Type":"ContainerDied","Data":"54c4458d06cefbf061bf6bc7398f871bf9ee67cc17b46788bb86e7c2b2f5331b"} Dec 12 17:17:31 crc kubenswrapper[4693]: I1212 17:17:31.012083 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnkp7" event={"ID":"1e04abc3-fd61-403b-ac07-aadbae0119b7","Type":"ContainerStarted","Data":"5a9c1a69ffa61b950e3aa9c837fcd54bb77dc192958638d8414deab952872d62"} Dec 12 17:17:31 crc kubenswrapper[4693]: I1212 17:17:31.036331 4693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cnkp7" 
podStartSLOduration=2.5180950539999998 podStartE2EDuration="6.036305349s" podCreationTimestamp="2025-12-12 17:17:25 +0000 UTC" firstStartedPulling="2025-12-12 17:17:26.953849887 +0000 UTC m=+5474.122489518" lastFinishedPulling="2025-12-12 17:17:30.472060222 +0000 UTC m=+5477.640699813" observedRunningTime="2025-12-12 17:17:31.030665258 +0000 UTC m=+5478.199304859" watchObservedRunningTime="2025-12-12 17:17:31.036305349 +0000 UTC m=+5478.204944960" Dec 12 17:17:35 crc kubenswrapper[4693]: I1212 17:17:35.822187 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cnkp7" Dec 12 17:17:35 crc kubenswrapper[4693]: I1212 17:17:35.822983 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cnkp7" Dec 12 17:17:35 crc kubenswrapper[4693]: I1212 17:17:35.906135 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cnkp7" Dec 12 17:17:36 crc kubenswrapper[4693]: I1212 17:17:36.126612 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cnkp7" Dec 12 17:17:38 crc kubenswrapper[4693]: I1212 17:17:38.484107 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cnkp7"] Dec 12 17:17:38 crc kubenswrapper[4693]: I1212 17:17:38.484876 4693 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cnkp7" podUID="1e04abc3-fd61-403b-ac07-aadbae0119b7" containerName="registry-server" containerID="cri-o://5a9c1a69ffa61b950e3aa9c837fcd54bb77dc192958638d8414deab952872d62" gracePeriod=2 Dec 12 17:17:39 crc kubenswrapper[4693]: I1212 17:17:39.117743 4693 generic.go:334] "Generic (PLEG): container finished" podID="1e04abc3-fd61-403b-ac07-aadbae0119b7" containerID="5a9c1a69ffa61b950e3aa9c837fcd54bb77dc192958638d8414deab952872d62" exitCode=0 Dec 12 17:17:39 crc kubenswrapper[4693]: I1212 17:17:39.118125 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnkp7" event={"ID":"1e04abc3-fd61-403b-ac07-aadbae0119b7","Type":"ContainerDied","Data":"5a9c1a69ffa61b950e3aa9c837fcd54bb77dc192958638d8414deab952872d62"} Dec 12 17:17:39 crc kubenswrapper[4693]: I1212 17:17:39.274232 4693 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cnkp7" Dec 12 17:17:39 crc kubenswrapper[4693]: I1212 17:17:39.430824 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e04abc3-fd61-403b-ac07-aadbae0119b7-utilities\") pod \"1e04abc3-fd61-403b-ac07-aadbae0119b7\" (UID: \"1e04abc3-fd61-403b-ac07-aadbae0119b7\") " Dec 12 17:17:39 crc kubenswrapper[4693]: I1212 17:17:39.431000 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvxst\" (UniqueName: \"kubernetes.io/projected/1e04abc3-fd61-403b-ac07-aadbae0119b7-kube-api-access-mvxst\") pod \"1e04abc3-fd61-403b-ac07-aadbae0119b7\" (UID: \"1e04abc3-fd61-403b-ac07-aadbae0119b7\") " Dec 12 17:17:39 crc kubenswrapper[4693]: I1212 17:17:39.431067 4693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e04abc3-fd61-403b-ac07-aadbae0119b7-catalog-content\") pod \"1e04abc3-fd61-403b-ac07-aadbae0119b7\" (UID: \"1e04abc3-fd61-403b-ac07-aadbae0119b7\") " Dec 12 17:17:39 crc kubenswrapper[4693]: I1212 17:17:39.431584 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e04abc3-fd61-403b-ac07-aadbae0119b7-utilities" (OuterVolumeSpecName: "utilities") pod "1e04abc3-fd61-403b-ac07-aadbae0119b7" (UID: "1e04abc3-fd61-403b-ac07-aadbae0119b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 17:17:39 crc kubenswrapper[4693]: I1212 17:17:39.432109 4693 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e04abc3-fd61-403b-ac07-aadbae0119b7-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 17:17:39 crc kubenswrapper[4693]: I1212 17:17:39.439081 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e04abc3-fd61-403b-ac07-aadbae0119b7-kube-api-access-mvxst" (OuterVolumeSpecName: "kube-api-access-mvxst") pod "1e04abc3-fd61-403b-ac07-aadbae0119b7" (UID: "1e04abc3-fd61-403b-ac07-aadbae0119b7"). InnerVolumeSpecName "kube-api-access-mvxst". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 17:17:39 crc kubenswrapper[4693]: I1212 17:17:39.500751 4693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e04abc3-fd61-403b-ac07-aadbae0119b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e04abc3-fd61-403b-ac07-aadbae0119b7" (UID: "1e04abc3-fd61-403b-ac07-aadbae0119b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 17:17:39 crc kubenswrapper[4693]: I1212 17:17:39.535070 4693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvxst\" (UniqueName: \"kubernetes.io/projected/1e04abc3-fd61-403b-ac07-aadbae0119b7-kube-api-access-mvxst\") on node \"crc\" DevicePath \"\"" Dec 12 17:17:39 crc kubenswrapper[4693]: I1212 17:17:39.535108 4693 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e04abc3-fd61-403b-ac07-aadbae0119b7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 17:17:40 crc kubenswrapper[4693]: I1212 17:17:40.137887 4693 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnkp7" event={"ID":"1e04abc3-fd61-403b-ac07-aadbae0119b7","Type":"ContainerDied","Data":"77cba8abd48cd55742e54c59c52d401e8efc5d9cbc7314d7745b779b5676c2be"} Dec 12 17:17:40 crc kubenswrapper[4693]: I1212 17:17:40.138237 4693 scope.go:117] "RemoveContainer" containerID="5a9c1a69ffa61b950e3aa9c837fcd54bb77dc192958638d8414deab952872d62" Dec 12 17:17:40 crc kubenswrapper[4693]: I1212 17:17:40.138470 4693 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cnkp7" Dec 12 17:17:40 crc kubenswrapper[4693]: I1212 17:17:40.190597 4693 scope.go:117] "RemoveContainer" containerID="54c4458d06cefbf061bf6bc7398f871bf9ee67cc17b46788bb86e7c2b2f5331b" Dec 12 17:17:40 crc kubenswrapper[4693]: I1212 17:17:40.191533 4693 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cnkp7"] Dec 12 17:17:40 crc kubenswrapper[4693]: I1212 17:17:40.217132 4693 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cnkp7"] Dec 12 17:17:40 crc kubenswrapper[4693]: I1212 17:17:40.227379 4693 scope.go:117] "RemoveContainer" containerID="4b7166f56df118ea857ff5d9b2433177b38f3b8ad12c8e8d59044cb3380e6e57" Dec 12 17:17:41 crc kubenswrapper[4693]: I1212 17:17:41.371787 4693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e04abc3-fd61-403b-ac07-aadbae0119b7" path="/var/lib/kubelet/pods/1e04abc3-fd61-403b-ac07-aadbae0119b7/volumes" Dec 12 17:17:47 crc kubenswrapper[4693]: I1212 17:17:47.397594 4693 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 17:17:47 crc kubenswrapper[4693]: I1212 17:17:47.405557 4693 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-66fcfb545d-whswt" Dec 12 17:18:12 crc kubenswrapper[4693]: I1212 17:18:12.530771 4693 patch_prober.go:28] interesting pod/machine-config-daemon-wvw2c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 17:18:12 crc kubenswrapper[4693]: I1212 17:18:12.531338 4693 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wvw2c" podUID="71d6bb6b-1211-4bbd-8946-2010438d6a5d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"